2 \part{Advanced Methods}
3 Before continuing, it might be wise to try some of your own examples in
4 Isabelle, reinforcing your knowledge of the basic functions.
6 Look through {\em Isabelle's Object-Logics\/} and try proving some
7 simple theorems. You probably should begin with first-order logic
8 ({\tt FOL} or~{\tt LK}). Try working some of the examples provided,
9 and others from the literature. Set theory~({\tt ZF}) and
10 Constructive Type Theory~({\tt CTT}) form a richer world for
11 mathematical reasoning and, again, many examples are in the
12 literature. Higher-order logic~({\tt HOL}) is Isabelle's most
elaborate logic. Its types and functions are identified with those of
the meta-logic.

Choose a logic that you already understand. Isabelle is a proof
17 tool, not a teaching tool; if you do not know how to do a particular proof
18 on paper, then you certainly will not be able to do it on the machine.
19 Even experienced users plan large proofs on paper.
21 We have covered only the bare essentials of Isabelle, but enough to perform
22 substantial proofs. By occasionally dipping into the {\em Reference
23 Manual}, you can learn additional tactics, subgoal commands and tacticals.
26 \section{Deriving rules in Isabelle}
28 A mathematical development goes through a progression of stages. Each
29 stage defines some concepts and derives rules about them. We shall see how
30 to derive rules, perhaps involving definitions, using Isabelle. The
following section will explain how to declare types, constants, rules and
definitions.
35 \subsection{Deriving a rule using tactics and meta-level assumptions}
36 \label{deriving-example}
37 \index{examples!of deriving rules}\index{assumptions!of main goal}
39 The subgoal module supports the derivation of rules, as discussed in
40 \S\ref{deriving}. The \ttindex{goal} command, when supplied a goal of
41 the form $\List{\theta@1; \ldots; \theta@k} \Imp \phi$, creates
42 $\phi\Imp\phi$ as the initial proof state and returns a list
43 consisting of the theorems ${\theta@i\;[\theta@i]}$, for $i=1$,
44 \ldots,~$k$. These meta-assumptions are also recorded internally,
45 allowing {\tt result} (which is called by {\tt qed}) to discharge them
46 in the original order.
48 Let us derive $\conj$ elimination using Isabelle.
49 Until now, calling {\tt goal} has returned an empty list, which we have
50 thrown away. In this example, the list contains the two premises of the
51 rule. We bind them to the \ML\ identifiers {\tt major} and {\tt
52 minor}:\footnote{Some ML compilers will print a message such as {\em
53 binding not exhaustive}. This warns that {\tt goal} must return a
54 2-element list. Otherwise, the pattern-match will fail; ML will
55 raise exception \xdx{Match}.}
57 val [major,minor] = goal FOL.thy
58 "[| P&Q; [| P; Q |] ==> R |] ==> R";
62 {\out val major = "P & Q [P & Q]" : thm}
63 {\out val minor = "[| P; Q |] ==> R [[| P; Q |] ==> R]" : thm}
65 Look at the minor premise, recalling that meta-level assumptions are
66 shown in brackets. Using {\tt minor}, we reduce $R$ to the subgoals
69 by (resolve_tac [minor] 1);
75 Deviating from~\S\ref{deriving}, we apply $({\conj}E1)$ forwards from the
76 assumption $P\conj Q$ to obtain the theorem~$P\;[P\conj Q]$.
79 {\out val it = "P [P & Q]" : thm}
81 by (resolve_tac [major RS conjunct1] 1);
86 Similarly, we solve the subgoal involving~$Q$.
89 {\out val it = "Q [P & Q]" : thm}
90 by (resolve_tac [major RS conjunct2] 1);
95 Calling \ttindex{topthm} returns the current proof state as a theorem.
96 Note that it contains assumptions. Calling \ttindex{qed} discharges
97 the assumptions --- both occurrences of $P\conj Q$ are discharged as
98 one --- and makes the variables schematic.
101 {\out val it = "R [P & Q, P & Q, [| P; Q |] ==> R]" : thm}
103 {\out val conjE = "[| ?P & ?Q; [| ?P; ?Q |] ==> ?R |] ==> ?R" : thm}
107 \subsection{Definitions and derived rules} \label{definitions}
108 \index{rules!derived}\index{definitions!and derived rules|(}
110 Definitions are expressed as meta-level equalities. Let us define negation
111 and the if-and-only-if connective:
\begin{eqnarray*}
  \neg \Var{P}         & \equiv & \Var{P}\imp\bot \\
  \Var{P}\bimp \Var{Q} & \equiv &
      (\Var{P}\imp \Var{Q}) \conj (\Var{Q}\imp \Var{P})
\end{eqnarray*}
117 \index{meta-rewriting}%
118 Isabelle permits {\bf meta-level rewriting} using definitions such as
119 these. {\bf Unfolding} replaces every instance
120 of $\neg \Var{P}$ by the corresponding instance of ${\Var{P}\imp\bot}$. For
121 example, $\forall x.\neg (P(x)\conj \neg R(x,0))$ unfolds to
\[ \forall x.(P(x)\conj (R(x,0)\imp\bot))\imp\bot. \]
123 {\bf Folding} a definition replaces occurrences of the right-hand side by
124 the left. The occurrences need not be free in the entire formula.
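Unfolding can be pictured as a bottom-up rewrite over the term tree. The following Python sketch illustrates the idea on a toy tuple representation of formulae; the representation and the function name are inventions for this sketch only (Isabelle itself rewrites typed lambda-terms at the meta-level):

```python
# Sketch of unfolding not_def, ~P == P --> False, over a toy term
# representation: a term is either an atom (string) or a tuple
# ('constructor', arg1, ...).  Illustrative only.
def unfold_not(t):
    if isinstance(t, tuple):
        if t[0] == 'not':
            # replace ~P by P --> False, unfolding recursively inside P
            return ('imp', unfold_not(t[1]), 'False')
        return (t[0],) + tuple(unfold_not(arg) for arg in t[1:])
    return t

# ALL x. ~(P(x) & ~R(x,0)), the example from the text
formula = ('all', 'x', ('not', ('and', ('P', 'x'),
                                ('not', ('R', 'x', '0')))))
unfold_not(formula)
# ('all', 'x', ('imp', ('and', ('P', 'x'),
#                       ('imp', ('R', 'x', '0'), 'False')), 'False'))
```

Folding would be the inverse traversal, matching instances of the right-hand side and replacing them by the left.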
126 When you define new concepts, you should derive rules asserting their
127 abstract properties, and then forget their definitions. This supports
128 modularity: if you later change the definitions without affecting their
129 abstract properties, then most of your proofs will carry through without
change. Indiscriminate unfolding makes a subgoal grow exponentially,
becoming unreadable.
133 Taking this point of view, Isabelle does not unfold definitions
134 automatically during proofs. Rewriting must be explicit and selective.
135 Isabelle provides tactics and meta-rules for rewriting, and a version of
the {\tt goal} command that unfolds the conclusion and premises of the rule
being derived.
139 For example, the intuitionistic definition of negation given above may seem
140 peculiar. Using Isabelle, we shall derive pleasanter negation rules:
141 \[ \infer[({\neg}I)]{\neg P}{\infer*{\bot}{[P]}} \qquad
142 \infer[({\neg}E)]{Q}{\neg P & P} \]
143 This requires proving the following meta-formulae:
144 $$ (P\Imp\bot) \Imp \neg P \eqno(\neg I) $$
145 $$ \List{\neg P; P} \Imp Q. \eqno(\neg E) $$
148 \subsection{Deriving the $\neg$ introduction rule}
149 To derive $(\neg I)$, we may call {\tt goal} with the appropriate
150 formula. Again, {\tt goal} returns a list consisting of the rule's
premises. We bind this one-element list to the \ML\ identifier {\tt
prems}:
154 val prems = goal FOL.thy "(P ==> False) ==> ~P";
158 {\out val prems = ["P ==> False [P ==> False]"] : thm list}
Calling \ttindex{rewrite_goals_tac} with \tdx{not_def}, which is the
definition of negation, unfolds that definition in the subgoals. It
leaves the main goal alone.
not_def;
{\out val it = "~?P == ?P --> False" : thm}
by (rewrite_goals_tac [not_def]);
169 {\out 1. P --> False}
171 Using \tdx{impI} and the premise, we reduce subgoal~1 to a triviality:
173 by (resolve_tac [impI] 1);
176 {\out 1. P ==> False}
178 by (resolve_tac prems 1);
183 The rest of the proof is routine. Note the form of the final result.
191 {\out val notI = "(?P ==> False) ==> ~?P" : thm}
193 \indexbold{*notI theorem}
195 There is a simpler way of conducting this proof. The \ttindex{goalw}
196 command starts a backward proof, as does {\tt goal}, but it also
197 unfolds definitions. Thus there is no need to call
198 \ttindex{rewrite_goals_tac}:
200 val prems = goalw FOL.thy [not_def]
201 "(P ==> False) ==> ~P";
204 {\out 1. P --> False}
205 {\out val prems = ["P ==> False [P ==> False]"] : thm list}
209 \subsection{Deriving the $\neg$ elimination rule}
210 Let us derive the rule $(\neg E)$. The proof follows that of~{\tt conjE}
211 above, with an additional step to unfold negation in the major premise.
212 Although the {\tt goalw} command is best for this, let us
213 try~{\tt goal} to see another way of unfolding definitions. After
214 binding the premises to \ML\ identifiers, we apply \tdx{FalseE}:
216 val [major,minor] = goal FOL.thy "[| ~P; P |] ==> R";
220 {\out val major = "~ P [~ P]" : thm}
221 {\out val minor = "P [P]" : thm}
223 by (resolve_tac [FalseE] 1);
228 Everything follows from falsity. And we can prove falsity using the
229 premises and Modus Ponens:
231 by (resolve_tac [mp] 1);
234 {\out 1. ?P1 --> False}
237 For subgoal~1, we transform the major premise from~$\neg P$
238 to~${P\imp\bot}$. The function \ttindex{rewrite_rule}, given a list of
239 definitions, unfolds them in a theorem. Rewriting does not
240 affect the theorem's hypothesis, which remains~$\neg P$:
242 rewrite_rule [not_def] major;
243 {\out val it = "P --> False [~P]" : thm}
244 by (resolve_tac [it] 1);
249 The subgoal {\tt?P1} has been instantiated to~{\tt P}, which we can prove
250 using the minor premise:
252 by (resolve_tac [minor] 1);
257 {\out val notE = "[| ~?P; ?P |] ==> ?R" : thm}
259 \indexbold{*notE theorem}
262 Again, there is a simpler way of conducting this proof. Recall that
the \ttindex{goalw} command unfolds definitions in the conclusion; it
also unfolds them in the premises:
val [major,minor] = goalw FOL.thy [not_def]
    "[| ~P; P |] ==> R";
268 {\out val major = "P --> False [~ P]" : thm}
269 {\out val minor = "P [P]" : thm}
271 Observe the difference in {\tt major}; the premises are unfolded without
272 calling~\ttindex{rewrite_rule}. Incidentally, the four calls to
273 \ttindex{resolve_tac} above can be collapsed to one, with the help
of~\ttindex{RS}; this is a typical example of forward reasoning from a
complex premise.
277 minor RS (major RS mp RS FalseE);
278 {\out val it = "?P [P, ~P]" : thm}
279 by (resolve_tac [it] 1);
284 \index{definitions!and derived rules|)}
286 \goodbreak\medskip\index{*"!"! symbol!in main goal}
287 Finally, here is a trick that is sometimes useful. If the goal
288 has an outermost meta-quantifier, then \ttindex{goal} and \ttindex{goalw}
289 do not return the rule's premises in the list of theorems; instead, the
290 premises become assumptions in subgoal~1.
291 %%%It does not matter which variables are quantified over.
293 goalw FOL.thy [not_def] "!!P R. [| ~P; P |] ==> R";
295 {\out !!P R. [| ~ P; P |] ==> R}
296 {\out 1. !!P R. [| P --> False; P |] ==> R}
297 val it = [] : thm list
299 The proof continues as before. But instead of referring to \ML\
300 identifiers, we refer to assumptions using {\tt eresolve_tac} or
303 by (resolve_tac [FalseE] 1);
305 {\out !!P R. [| ~ P; P |] ==> R}
306 {\out 1. !!P R. [| P --> False; P |] ==> False}
308 by (eresolve_tac [mp] 1);
310 {\out !!P R. [| ~ P; P |] ==> R}
311 {\out 1. !!P R. P ==> P}
315 {\out !!P R. [| ~ P; P |] ==> R}
318 Calling \ttindex{qed} strips the meta-quantifiers, so the resulting
319 theorem is the same as before.
322 {\out val notE = "[| ~?P; ?P |] ==> ?R" : thm}
324 Do not use the {\tt!!}\ trick if the premises contain meta-level
325 connectives, because \ttindex{eresolve_tac} and \ttindex{assume_tac} would
326 not be able to handle the resulting assumptions. The trick is not suitable
327 for deriving the introduction rule~$(\neg I)$.
330 \section{Defining theories}\label{sec:defining-theories}
331 \index{theories!defining|(}
333 Isabelle makes no distinction between simple extensions of a logic ---
334 like specifying a type~$bool$ with constants~$true$ and~$false$ ---
335 and defining an entire logic. A theory definition has a form like
\begin{ttbox}
\(T\) = \(S@1\) + \(\cdots\) + \(S@n\) +
classes      {\it class declarations}
types        {\it type declarations and synonyms}
arities      {\it type arity declarations}
consts       {\it constant declarations}
syntax       {\it syntactic constant declarations}
translations {\it ast translation rules}
defs         {\it meta-logical definitions}
rules        {\it rule declarations}
end
\end{ttbox}
350 This declares the theory $T$ to extend the existing theories
351 $S@1$,~\ldots,~$S@n$. It may introduce new classes, types, arities
352 (of existing types), constants and rules; it can specify the default
353 sort for type variables. A constant declaration can specify an
354 associated concrete syntax. The translations section specifies
355 rewrite rules on abstract syntax trees, handling notations and
356 abbreviations. \index{*ML section} The {\tt ML} section may contain
357 code to perform arbitrary syntactic transformations. The main
358 declaration forms are discussed below. The full syntax can be found
359 in \iflabelundefined{app:TheorySyntax}{an appendix of the {\it
360 Reference Manual}}{App.\ts\ref{app:TheorySyntax}}. Also note that
361 object-logics may add further theory sections, for example {\tt
362 typedef}, {\tt datatype} in \HOL.
364 All the declaration parts can be omitted or repeated and may appear in
365 any order, except that the {\ML} section must be last (after the {\tt
366 end} keyword). In the simplest case, $T$ is just the union of
367 $S@1$,~\ldots,~$S@n$. New theories always extend one or more other
368 theories, inheriting their types, constants, syntax, etc. The theory
369 \thydx{Pure} contains nothing but Isabelle's meta-logic. The variant
370 \thydx{CPure} offers the more usual higher-order function application
371 syntax $t\,u@1\ldots\,u@n$ instead of $t(u@1,\ldots,u@n)$ in \Pure.
373 Each theory definition must reside in a separate file, whose name is
the theory's name with {\tt.thy} appended. Calling
375 \ttindexbold{use_thy}~{\tt"{\it T\/}"} reads the definition from {\it
376 T}{\tt.thy}, writes a corresponding file of {\ML} code {\tt.{\it
377 T}.thy.ML}, reads the latter file, and deletes it if no errors
378 occurred. This declares the {\ML} structure~$T$, which contains a
379 component {\tt thy} denoting the new theory, a component for each
380 rule, and everything declared in {\it ML code}.
382 Errors may arise during the translation to {\ML} (say, a misspelled
383 keyword) or during creation of the new theory (say, a type error in a
384 rule). But if all goes well, {\tt use_thy} will finally read the file
385 {\it T}{\tt.ML} (if it exists). This file typically contains proofs
386 that refer to the components of~$T$. The structure is automatically
387 opened, so its components may be referred to by unqualified names,
388 e.g.\ just {\tt thy} instead of $T${\tt .thy}.
390 \ttindexbold{use_thy} automatically loads a theory's parents before
391 loading the theory itself. When a theory file is modified, many
392 theories may have to be reloaded. Isabelle records the modification
393 times and dependencies of theory files. See
394 \iflabelundefined{sec:reloading-theories}{the {\em Reference Manual\/}}%
395 {\S\ref{sec:reloading-theories}}
399 \subsection{Declaring constants, definitions and rules}
400 \indexbold{constants!declaring}\index{rules!declaring}
402 Most theories simply declare constants, definitions and rules. The {\bf
403 constant declaration part} has the form
\begin{ttbox}
consts  \(c@1\) :: \(\tau@1\)
        \vdots
        \(c@n\) :: \(\tau@n\)
\end{ttbox}
409 where $c@1$, \ldots, $c@n$ are constants and $\tau@1$, \ldots, $\tau@n$ are
410 types. The types must be enclosed in quotation marks if they contain
411 user-declared infix type constructors like {\tt *}. Each
412 constant must be enclosed in quotation marks unless it is a valid
413 identifier. To declare $c@1$, \ldots, $c@n$ as constants of type $\tau$,
414 the $n$ declarations may be abbreviated to a single line:
416 \(c@1\), \ldots, \(c@n\) :: \(\tau\)
418 The {\bf rule declaration part} has the form
\begin{ttbox}
rules   \(id@1\) "\(rule@1\)"
        \vdots
        \(id@n\) "\(rule@n\)"
\end{ttbox}
424 where $id@1$, \ldots, $id@n$ are \ML{} identifiers and $rule@1$, \ldots,
425 $rule@n$ are expressions of type~$prop$. Each rule {\em must\/} be
426 enclosed in quotation marks.
428 \indexbold{definitions} The {\bf definition part} is similar, but with
429 the keyword {\tt defs} instead of {\tt rules}. {\bf Definitions} are
430 rules of the form $s \equiv t$, and should serve only as
431 abbreviations. The simplest form of a definition is $f \equiv t$,
432 where $f$ is a constant. Also allowed are $\eta$-equivalent forms of
433 this, where the arguments of~$f$ appear applied on the left-hand side
434 of the equation instead of abstracted on the right-hand side.
436 Isabelle checks for common errors in definitions, such as extra
variables on the right-hand side, but currently does not perform a complete
438 test of well-formedness. Thus determined users can write
439 non-conservative `definitions' by using mutual recursion, for example;
440 the consequences of such actions are their responsibility.
442 \index{examples!of theories} This example theory extends first-order
logic by declaring and defining two constants, {\em nand} and {\em xor}:
447 consts nand,xor :: [o,o] => o
448 defs nand_def "nand(P,Q) == ~(P & Q)"
449 xor_def "xor(P,Q) == P & ~Q | ~P & Q"
453 Declaring and defining constants can be combined:
456 constdefs nand :: [o,o] => o
457 "nand(P,Q) == ~(P & Q)"
459 "xor(P,Q) == P & ~Q | ~P & Q"
462 {\tt constdefs} generates the names {\tt nand_def} and {\tt xor_def}
463 automatically, which is why it is restricted to alphanumeric identifiers. In
464 general it has the form
466 constdefs \(id@1\) :: \(\tau@1\)
467 "\(id@1 \equiv \dots\)"
469 \(id@n\) :: \(\tau@n\)
470 "\(id@n \equiv \dots\)"
475 A common mistake when writing definitions is to introduce extra free variables
476 on the right-hand side as in the following fictitious definition:
478 defs prime_def "prime(p) == (m divides p) --> (m=1 | m=p)"
480 Isabelle rejects this ``definition'' because of the extra {\tt m} on the
right-hand side, which would introduce an inconsistency. What you should have
written is
484 defs prime_def "prime(p) == ALL m. (m divides p) --> (m=1 | m=p)"
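The rejected definition can be understood as a comparison of identifier sets: every variable occurring free on the right-hand side must also occur on the left. A rough Python sketch of such a check follows; the tokenisation and the table of known constants are invented simplifications (in particular, this naive version does not track binders such as {\tt ALL}):

```python
import re

# Names treated as constants rather than variables (invented table).
CONSTANTS = {'divides', 'ALL'}

def extra_vars(lhs, rhs):
    """Return identifiers free on the rhs but absent from the lhs."""
    ident = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")
    lhs_vars = set(ident.findall(lhs))
    rhs_vars = set(ident.findall(rhs)) - CONSTANTS
    return sorted(rhs_vars - lhs_vars)

extra_vars("prime(p)", "(m divides p) --> (m=1 | m=p)")  # ['m']: rejected
extra_vars("nand(P,Q)", "~(P & Q)")                      # []: accepted
```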
488 \subsection{Declaring type constructors}
489 \indexbold{types!declaring}\indexbold{arities!declaring}
491 Types are composed of type variables and {\bf type constructors}. Each
492 type constructor takes a fixed number of arguments. They are declared
with an \ML-like syntax. If $list$ takes one type argument, $tree$ takes
two arguments and $nat$ takes no arguments, then these type constructors
can be declared by
\begin{ttbox}
types 'a list
      ('a,'b) tree
      nat
\end{ttbox}
502 The {\bf type declaration part} has the general form
\begin{ttbox}
types   \(tids@1\) \(id@1\)
        \vdots
        \(tids@n\) \(id@n\)
\end{ttbox}
508 where $id@1$, \ldots, $id@n$ are identifiers and $tids@1$, \ldots, $tids@n$
509 are type argument lists as shown in the example above. It declares each
510 $id@i$ as a type constructor with the specified number of argument places.
512 The {\bf arity declaration part} has the form
\begin{ttbox}
arities \(tycon@1\) :: \(arity@1\)
        \vdots
        \(tycon@n\) :: \(arity@n\)
\end{ttbox}
518 where $tycon@1$, \ldots, $tycon@n$ are identifiers and $arity@1$, \ldots,
519 $arity@n$ are arities. Arity declarations add arities to existing
520 types; they do not declare the types themselves.
In the simplest case, for a 0-place type constructor, an arity is simply
522 the type's class. Let us declare a type~$bool$ of class $term$, with
523 constants $tt$ and~$ff$. (In first-order logic, booleans are
524 distinct from formulae, which have type $o::logic$.)
525 \index{examples!of theories}
533 A $k$-place type constructor may have arities of the form
$(s@1,\ldots,s@k)c$, where $s@1,\ldots,s@k$ are sorts and $c$ is a class.
535 Each sort specifies a type argument; it has the form $\{c@1,\ldots,c@m\}$,
536 where $c@1$, \dots,~$c@m$ are classes. Mostly we deal with singleton
537 sorts, and may abbreviate them by dropping the braces. The arity
$(term)term$ is short for $(\{term\})term$. Recall the discussion in
\S\ref{polymorphic}.
541 A type constructor may be overloaded (subject to certain conditions) by
542 appearing in several arity declarations. For instance, the function type
543 constructor~$fun$ has the arity $(logic,logic)logic$; in higher-order
544 logic, it is declared also to have arity $(term,term)term$.
546 Theory {\tt List} declares the 1-place type constructor $list$, gives
it arity $(term)term$, and declares constants $Nil$ and $Cons$ with
their types.%
549 \footnote{In the {\tt consts} part, type variable {\tt'a} has the default
550 sort, which is {\tt term}. See the {\em Reference Manual\/}
551 \iflabelundefined{sec:ref-defining-theories}{}%
552 {(\S\ref{sec:ref-defining-theories})} for more information.}
553 \index{examples!of theories}
557 arities list :: (term)term
558 consts Nil :: 'a list
559 Cons :: ['a, 'a list] => 'a list
562 Multiple arity declarations may be abbreviated to a single line:
564 arities \(tycon@1\), \ldots, \(tycon@n\) :: \(arity\)
568 %Arity declarations resemble constant declarations, but there are {\it no\/}
569 %quotation marks! Types and rules must be quoted because the theory
570 %translator passes them verbatim to the {\ML} output file.
573 \subsection{Type synonyms}\indexbold{type synonyms}
574 Isabelle supports {\bf type synonyms} ({\bf abbreviations}) which are similar
575 to those found in \ML. Such synonyms are defined in the type declaration part
and are fairly self-explanatory:
578 types gate = [o,o] => o
580 ('a,'b)nuf = 'b => 'a
582 Type declarations and synonyms can be mixed arbitrarily:
585 'a stream = nat => 'a
589 A synonym is merely an abbreviation for some existing type expression.
590 Hence synonyms may not be recursive! Internally all synonyms are
591 fully expanded. As a consequence Isabelle output never contains
592 synonyms. Their main purpose is to improve the readability of theory
593 definitions. Synonyms can be used just like any other type:
595 consts and,or :: gate
596 negate :: signal => signal
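Internally, expanding a synonym is no more than repeated table lookup. This Python sketch (table contents taken from the examples above, code invented for illustration) shows why non-recursiveness guarantees termination and why output never mentions synonyms:

```python
# Synonym table: name -> existing type expression (no recursion allowed).
SYNONYMS = {'gate': '[o,o] => o'}

def expand(ty):
    # Because synonyms may not be recursive, this loop must terminate.
    while ty in SYNONYMS:
        ty = SYNONYMS[ty]
    return ty

expand('gate')   # '[o,o] => o'
expand('nat')    # 'nat': not a synonym, returned unchanged
```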
599 \subsection{Infix and mixfix operators}
600 \index{infixes}\index{examples!of theories}
602 Infix or mixfix syntax may be attached to constants. Consider the
606 consts "~&" :: [o,o] => o (infixl 35)
607 "#" :: [o,o] => o (infixl 30)
608 defs nand_def "P ~& Q == ~(P & Q)"
609 xor_def "P # Q == P & ~Q | ~P & Q"
612 The constant declaration part declares two left-associating infix operators
613 with their priorities, or precedences; they are $\nand$ of priority~35 and
614 $\xor$ of priority~30. Hence $P \xor Q \xor R$ is parsed as $(P\xor Q)
615 \xor R$ and $P \xor Q \nand R$ as $P \xor (Q \nand R)$. Note the quotation
616 marks in \verb|"~&"| and \verb|"#"|.
618 The constants \hbox{\verb|op ~&|} and \hbox{\verb|op #|} are declared
619 automatically, just as in \ML. Hence you may write propositions like
620 \verb|op #(True) == op ~&(True)|, which asserts that the functions $\lambda
621 Q.True \xor Q$ and $\lambda Q.True \nand Q$ are identical.
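The effect of the two priorities can be mimicked by a tiny precedence-climbing parser. The Python sketch below is entirely invented (Isabelle generates its parser from the declarations); it merely demonstrates how priority~35 for $\nand$ and priority~30 for $\xor$, together with left associativity, force the groupings described above:

```python
# Invented demonstration of infixl priorities; higher binds tighter.
PRIO = {'~&': 35, '#': 30}

def parse(tokens):
    """Parse a token list of atoms and infixl operators into a tree."""
    def expr(min_prio, pos):
        left, pos = tokens[pos], pos + 1          # an operand
        while (pos < len(tokens) and tokens[pos] in PRIO
               and PRIO[tokens[pos]] >= min_prio):
            op = tokens[pos]
            # left-associative: the right operand must bind strictly tighter
            right, pos = expr(PRIO[op] + 1, pos + 1)
            left = (op, left, right)
        return left, pos
    tree, _ = expr(0, 0)
    return tree

parse(['P', '#', 'Q', '#', 'R'])    # ('#', ('#', 'P', 'Q'), 'R')
parse(['P', '#', 'Q', '~&', 'R'])   # ('#', 'P', ('~&', 'Q', 'R'))
```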
623 \bigskip\index{mixfix declarations}
624 {\bf Mixfix} operators may have arbitrary context-free syntaxes. Let us
625 add a line to the constant declaration part:
627 If :: [o,o,o] => o ("if _ then _ else _")
629 This declares a constant $If$ of type $[o,o,o] \To o$ with concrete syntax {\tt
630 if~$P$ then~$Q$ else~$R$} as well as {\tt If($P$,$Q$,$R$)}. Underscores
631 denote argument positions.
633 The declaration above does not allow the {\tt if}-{\tt then}-{\tt
634 else} construct to be printed split across several lines, even if it
635 is too long to fit on one line. Pretty-printing information can be
636 added to specify the layout of mixfix operators. For details, see
637 \iflabelundefined{Defining-Logics}%
638 {the {\it Reference Manual}, chapter `Defining Logics'}%
639 {Chap.\ts\ref{Defining-Logics}}.
641 Mixfix declarations can be annotated with priorities, just like
642 infixes. The example above is just a shorthand for
644 If :: [o,o,o] => o ("if _ then _ else _" [0,0,0] 1000)
646 The numeric components determine priorities. The list of integers
647 defines, for each argument position, the minimal priority an expression
648 at that position must have. The final integer is the priority of the
649 construct itself. In the example above, any argument expression is
650 acceptable because priorities are non-negative, and conditionals may
651 appear everywhere because 1000 is the highest priority. On the other
652 hand, the declaration
654 If :: [o,o,o] => o ("if _ then _ else _" [100,0,0] 99)
656 defines concrete syntax for a conditional whose first argument cannot have
657 the form {\tt if~$P$ then~$Q$ else~$R$} because it must have a priority
658 of at least~100. We may of course write
660 if (if $P$ then $Q$ else $R$) then $S$ else $T$
662 because expressions in parentheses have maximal priority.
664 Binary type constructors, like products and sums, may also be declared as
665 infixes. The type declaration below introduces a type constructor~$*$ with
666 infix notation $\alpha*\beta$, together with the mixfix notation
667 ${<}\_,\_{>}$ for pairs. We also see a rule declaration part.
668 \index{examples!of theories}\index{mixfix declarations}
671 types ('a,'b) "*" (infixl 20)
672 arities "*" :: (term,term)term
673 consts fst :: "'a * 'b => 'a"
674 snd :: "'a * 'b => 'b"
675 Pair :: "['a,'b] => 'a * 'b" ("(1<_,/_>)")
676 rules fst "fst(<a,b>) = a"
682 The name of the type constructor is~{\tt *} and not {\tt op~*}, as
683 it would be in the case of an infix constant. Only infix type
684 constructors can have symbolic names like~{\tt *}. General mixfix
syntax for types may be introduced via appropriate {\tt syntax}
declarations.
690 \subsection{Overloading}
691 \index{overloading}\index{examples!of theories}
692 The {\bf class declaration part} has the form
694 classes \(id@1\) < \(c@1\)
698 where $id@1$, \ldots, $id@n$ are identifiers and $c@1$, \ldots, $c@n$ are
699 existing classes. It declares each $id@i$ as a new class, a subclass
700 of~$c@i$. In the general case, an identifier may be declared to be a
701 subclass of $k$ existing classes:
703 \(id\) < \(c@1\), \ldots, \(c@k\)
705 Type classes allow constants to be overloaded. As suggested in
706 \S\ref{polymorphic}, let us define the class $arith$ of arithmetic
707 types with the constants ${+} :: [\alpha,\alpha]\To \alpha$ and $0,1 {::}
708 \alpha$, for $\alpha{::}arith$. We introduce $arith$ as a subclass of
709 $term$ and add the three polymorphic constants of this class.
710 \index{examples!of theories}\index{constants!overloaded}
714 consts "0" :: 'a::arith ("0")
715 "1" :: 'a::arith ("1")
716 "+" :: ['a::arith,'a] => 'a (infixl 60)
719 No rules are declared for these constants: we merely introduce their
720 names without specifying properties. On the other hand, classes
721 with rules make it possible to prove {\bf generic} theorems. Such
722 theorems hold for all instances, all types in that class.
724 We can now obtain distinct versions of the constants of $arith$ by
725 declaring certain types to be of class $arith$. For example, let us
726 declare the 0-place type constructors $bool$ and $nat$:
727 \index{examples!of theories}
731 arities bool, nat :: arith
732 consts Suc :: nat=>nat
734 rules add0 "0 + n = n::nat"
735 addS "Suc(m)+n = Suc(m+n)"
737 or0l "0 + x = x::bool"
738 or0r "x + 0 = x::bool"
739 or1l "1 + x = 1::bool"
740 or1r "x + 1 = 1::bool"
743 Because $nat$ and $bool$ have class $arith$, we can use $0$, $1$ and $+$ at
744 either type. The type constraints in the axioms are vital. Without
constraints, the $x$ in $1+x = 1$ would have type $\alpha{::}arith$
746 and the axiom would hold for any type of class $arith$. This would
747 collapse $nat$ to a trivial type:
748 \[ Suc(1) = Suc(0+1) = Suc(0)+1 = 1+1 = 1! \]
751 \section{Theory example: the natural numbers}
753 We shall now work through a small example of formalized mathematics
754 demonstrating many of the theory extension features.
757 \subsection{Extending first-order logic with the natural numbers}
758 \index{examples!of theories}
760 Section\ts\ref{sec:logical-syntax} has formalized a first-order logic,
761 including a type~$nat$ and the constants $0::nat$ and $Suc::nat\To nat$.
762 Let us introduce the Peano axioms for mathematical induction and the
763 freeness of $0$ and~$Suc$:\index{axioms!Peano}
764 \[ \vcenter{\infer[(induct)]{P[n/x]}{P[0/x] & \infer*{P[Suc(x)/x]}{[P]}}}
\qquad \parbox{4.5cm}{provided $x$ is not free in any assumption except~$P$}
\]
767 \[ \infer[(Suc\_inject)]{m=n}{Suc(m)=Suc(n)} \qquad
\infer[(Suc\_neq\_0)]{R}{Suc(m)=0}
\]
770 Mathematical induction asserts that $P(n)$ is true, for any $n::nat$,
771 provided $P(0)$ holds and that $P(x)$ implies $P(Suc(x))$ for all~$x$.
772 Some authors express the induction step as $\forall x. P(x)\imp P(Suc(x))$.
773 To avoid making induction require the presence of other connectives, we
774 formalize mathematical induction as
775 $$ \List{P(0); \Forall x. P(x)\Imp P(Suc(x))} \Imp P(n). \eqno(induct) $$
778 Similarly, to avoid expressing the other rules using~$\forall$, $\imp$
779 and~$\neg$, we take advantage of the meta-logic;\footnote
780 {On the other hand, the axioms $Suc(m)=Suc(n) \bimp m=n$
781 and $\neg(Suc(m)=0)$ are logically equivalent to those given, and work
782 better with Isabelle's simplifier.}
784 an elimination rule for $Suc(m)=0$:
785 $$ Suc(m)=Suc(n) \Imp m=n \eqno(Suc\_inject) $$
786 $$ Suc(m)=0 \Imp R \eqno(Suc\_neq\_0) $$
789 We shall also define a primitive recursion operator, $rec$. Traditionally,
790 primitive recursion takes a natural number~$a$ and a 2-place function~$f$,
791 and obeys the equations
\begin{eqnarray*}
  rec(0,a,f)      & = & a \\
  rec(Suc(m),a,f) & = & f(m, rec(m,a,f))
\end{eqnarray*}
Addition, defined by $m+n \equiv rec(m,n,\lambda x\,y.Suc(y))$,
should satisfy
\begin{eqnarray*}
  0+n      & = & n \\
  Suc(m)+n & = & Suc(m+n)
\end{eqnarray*}
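The recursion equations can be checked against a model in ordinary arithmetic. In the Python sketch below (function names invented), machine integers stand in for Peano numerals, with $n-1$ playing the role of the $m$ in $Suc(m)$:

```python
# Model of the primitive recursion operator:
#   rec(0, a, f)      = a
#   rec(Suc(m), a, f) = f(m, rec(m, a, f))
def rec(n, a, f):
    if n == 0:
        return a
    m = n - 1                       # n = Suc(m)
    return f(m, rec(m, a, f))

# m + n == rec(m, n, %x y. Suc(y)), as in add_def below
def add(m, n):
    return rec(m, n, lambda x, y: y + 1)

add(2, 3)   # 5
add(0, 7)   # 7, matching the equation 0+n = n
```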
802 Primitive recursion appears to pose difficulties: first-order logic has no
803 function-valued expressions. We again take advantage of the meta-logic,
804 which does have functions. We also generalise primitive recursion to be
polymorphic over any type of class~$term$, and declare the addition
function:
\begin{eqnarray*}
  rec & :: & [nat, \alpha{::}term, [nat,\alpha]\To\alpha] \To\alpha \\
  +   & :: & [nat,nat]\To nat
\end{eqnarray*}
813 \subsection{Declaring the theory to Isabelle}
814 \index{examples!of theories}
815 Let us create the theory \thydx{Nat} starting from theory~\verb$FOL$,
816 which contains only classical logic with no natural numbers. We declare
817 the 0-place type constructor $nat$ and the associated constants. Note that
818 the constant~0 requires a mixfix annotation because~0 is not a legal
819 identifier, and could not otherwise be written in terms:
820 \begin{ttbox}\index{mixfix declarations}
Nat = FOL +
types   nat
arities nat :: term
consts  "0"      :: nat                 ("0")
        Suc      :: nat=>nat
        rec      :: [nat, 'a, [nat,'a]=>'a] => 'a
827 "+" :: [nat, nat] => nat (infixl 60)
828 rules Suc_inject "Suc(m)=Suc(n) ==> m=n"
829 Suc_neq_0 "Suc(m)=0 ==> R"
830 induct "[| P(0); !!x. P(x) ==> P(Suc(x)) |] ==> P(n)"
831 rec_0 "rec(0,a,f) = a"
832 rec_Suc "rec(Suc(m), a, f) = f(m, rec(m,a,f))"
833 add_def "m+n == rec(m, n, \%x y. Suc(y))"
836 In axiom {\tt add_def}, recall that \verb|%| stands for~$\lambda$.
837 Loading this theory file creates the \ML\ structure {\tt Nat}, which
838 contains the theory and axioms.
840 \subsection{Proving some recursion equations}
841 File {\tt FOL/ex/Nat.ML} contains proofs involving this theory of the
842 natural numbers. As a trivial example, let us derive recursion equations
843 for \verb$+$. Here is the zero case:
845 goalw Nat.thy [add_def] "0+n = n";
848 {\out 1. rec(0,n,\%x y. Suc(y)) = n}
850 by (resolve_tac [rec_0] 1);
856 And here is the successor case:
858 goalw Nat.thy [add_def] "Suc(m)+n = Suc(m+n)";
860 {\out Suc(m) + n = Suc(m + n)}
861 {\out 1. rec(Suc(m),n,\%x y. Suc(y)) = Suc(rec(m,n,\%x y. Suc(y)))}
863 by (resolve_tac [rec_Suc] 1);
865 {\out Suc(m) + n = Suc(m + n)}
869 The induction rule raises some complications, which are discussed next.
870 \index{theories!defining|)}
873 \section{Refinement with explicit instantiation}
874 \index{resolution!with instantiation}
875 \index{instantiation|(}
877 In order to employ mathematical induction, we need to refine a subgoal by
878 the rule~$(induct)$. The conclusion of this rule is $\Var{P}(\Var{n})$,
879 which is highly ambiguous in higher-order unification. It matches every
880 way that a formula can be regarded as depending on a subterm of type~$nat$.
881 To get round this problem, we could make the induction rule conclude
882 $\forall n.\Var{P}(n)$ --- but putting a subgoal into this form requires
883 refinement by~$(\forall E)$, which is equally hard!
885 The tactic {\tt res_inst_tac}, like {\tt resolve_tac}, refines a subgoal by
886 a rule. But it also accepts explicit instantiations for the rule's
889 \item[\ttindex{res_inst_tac} {\it insts} {\it thm} {\it i}]
890 instantiates the rule {\it thm} with the instantiations {\it insts}, and
891 then performs resolution on subgoal~$i$.
893 \item[\ttindex{eres_inst_tac}]
894 and \ttindex{dres_inst_tac} are similar, but perform elim-resolution
895 and destruct-resolution, respectively.
The list {\it insts} consists of pairs $[(v@1,e@1), \ldots, (v@n,e@n)]$,
where $v@1$, \ldots, $v@n$ are names of schematic variables in the rule ---
with no leading question marks! --- and $e@1$, \ldots, $e@n$ are
expressions giving their instantiations.  The expressions are type-checked
in the context of a particular subgoal: free variables receive the same
types as they have in the subgoal, and parameters may appear.  Type
variable instantiations may appear in~{\it insts}, but they are seldom
required: {\tt res_inst_tac} instantiates type variables automatically
whenever the type of~$e@i$ is an instance of the type of~$\Var{v@i}$.
\subsection{A simple proof by induction}
\index{examples!of induction}
Let us prove that no natural number~$k$ equals its own successor.  To
use~$(induct)$, we instantiate~$\Var{n}$ to~$k$; Isabelle finds a good
instantiation for~$\Var{P}$.
\begin{ttbox}
goal Nat.thy "~ (Suc(k) = k)";
{\out 1. Suc(k) ~= k}
by (res_inst_tac [("n","k")] induct 1);
{\out 1. Suc(0) ~= 0}
{\out 2. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
\end{ttbox}
We should check that Isabelle has correctly applied induction.  Subgoal~1
is the base case, with $k$ replaced by~0.  Subgoal~2 is the inductive step,
with $k$ replaced by~$Suc(x)$ and with an induction hypothesis for~$x$.
The rest of the proof demonstrates~\tdx{notI}, \tdx{notE} and the
other rules of theory {\tt Nat}.  The base case holds by~\ttindex{Suc_neq_0}:
\begin{ttbox}
by (resolve_tac [notI] 1);
{\out 1. Suc(0) = 0 ==> False}
{\out 2. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
by (eresolve_tac [Suc_neq_0] 1);
{\out 1. !!x. Suc(x) ~= x ==> Suc(Suc(x)) ~= Suc(x)}
\end{ttbox}
The inductive step holds by the contrapositive of~\ttindex{Suc_inject}.
Negation rules transform the subgoal into that of proving $Suc(x)=x$ from
$Suc(Suc(x)) = Suc(x)$:
\begin{ttbox}
by (resolve_tac [notI] 1);
{\out 1. !!x. [| Suc(x) ~= x; Suc(Suc(x)) = Suc(x) |] ==> False}
by (eresolve_tac [notE] 1);
{\out 1. !!x. Suc(Suc(x)) = Suc(x) ==> Suc(x) = x}
by (eresolve_tac [Suc_inject] 1);
\end{ttbox}
\subsection{An example of ambiguity in {\tt resolve_tac}}
\index{examples!of induction}\index{unification!higher-order}
If you try the example above, you may observe that {\tt res_inst_tac} is
not actually needed.  Almost by chance, \ttindex{resolve_tac} finds the right
instantiation for~$(induct)$ to yield the desired next state.  With more
complex formulae, our luck fails.
\begin{ttbox}
goal Nat.thy "(k+m)+n = k+(m+n)";
{\out k + m + n = k + (m + n)}
{\out 1. k + m + n = k + (m + n)}
by (resolve_tac [induct] 1);
{\out k + m + n = k + (m + n)}
{\out 1. k + m + n = 0}
{\out 2. !!x. k + m + n = x ==> k + m + n = Suc(x)}
\end{ttbox}
This proof requires induction on~$k$.  The occurrence of~0 in subgoal~1
indicates that induction has been applied to the term~$k+(m+n)$; this
application is sound but will not lead to a proof here.  Fortunately,
Isabelle can (lazily!) generate all the valid applications of induction.
The \ttindex{back} command causes backtracking to an alternative outcome of
the tactic:
\begin{ttbox}
back();
{\out k + m + n = k + (m + n)}
{\out 1. k + m + n = k + 0}
{\out 2. !!x. k + m + n = k + x ==> k + m + n = k + Suc(x)}
\end{ttbox}
Now induction has been applied to~$m+n$.  This is equally useless.  Let us
call \ttindex{back} again.
\begin{ttbox}
back();
{\out k + m + n = k + (m + n)}
{\out 1. k + m + 0 = k + (m + 0)}
{\out 2. !!x. k + m + x = k + (m + x) ==>}
{\out         k + m + Suc(x) = k + (m + Suc(x))}
\end{ttbox}
Now induction has been applied to~$n$.  What is the next alternative?
\begin{ttbox}
back();
{\out k + m + n = k + (m + n)}
{\out 1. k + m + n = k + (m + 0)}
{\out 2. !!x. k + m + n = k + (m + x) ==> k + m + n = k + (m + Suc(x))}
\end{ttbox}
Inspecting subgoal~1 reveals that induction has been applied to just the
second occurrence of~$n$.  This perfectly legitimate induction is useless
here.  The main goal admits fourteen different applications of induction.
The number is exponential in the size of the formula.
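The count of fourteen can be checked with a back-of-envelope computation.  The following Python sketch (illustrative, not part of Isabelle) tallies the occurrences of every subterm of type~$nat$ in the goal; each nonempty subset of occurrences of some subterm gives one way to match $\Var{P}(\Var{n})$, and the trivial match where $\Var{P}$ ignores its argument gives one more:

```python
# Counting the higher-order matches of ?P(?n) against (k+m)+n = k+(m+n).
# Terms are nested tuples ("add", a, b); atoms are variable names.

GOAL_SIDES = [("add", ("add", "k", "m"), "n"),   # (k+m)+n
              ("add", "k", ("add", "m", "n"))]   # k+(m+n)

def count_occurrences(t, tally):
    """Tally every occurrence of every subterm of type nat in t."""
    tally[t] = tally.get(t, 0) + 1
    if isinstance(t, tuple):
        for arg in t[1:]:
            count_occurrences(arg, tally)

tally = {}
for side in GOAL_SIDES:
    count_occurrences(side, tally)

# Each subterm with n occurrences can be abstracted at any nonempty
# subset of them (2^n - 1 ways); add 1 for the trivial match.
applications = sum(2**n - 1 for n in tally.values()) + 1
assert applications == 14
```

The variables $k$, $m$, $n$ each occur twice (3 nonempty subsets each), and the four compound subterms occur once each, giving $3+3+3+4+1=14$.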
\subsection{Proving that addition is associative}
Let us invoke the induction rule properly, using~{\tt
res_inst_tac}.  At the same time, we shall have a glimpse at Isabelle's
simplification tactics, which are described in
\iflabelundefined{simp-chap}%
    {the {\em Reference Manual}}{Chap.\ts\ref{simp-chap}}.
\index{simplification}\index{examples!of simplification}

Isabelle's simplification tactics repeatedly apply equations to a subgoal,
perhaps proving it.  For efficiency, the rewrite rules must be
packaged into a {\bf simplification set},\index{simplification sets}
or {\bf simpset}.  We take the standard simpset for first-order logic and
insert the equations proved in the previous section, namely
$0+n=n$ and ${\tt Suc}(m)+n={\tt Suc}(m+n)$:
\begin{ttbox}
val add_ss = FOL_ss addsimps [add_0, add_Suc];
\end{ttbox}
We state the goal for associativity of addition, and
use \ttindex{res_inst_tac} to invoke induction on~$k$:
\begin{ttbox}
goal Nat.thy "(k+m)+n = k+(m+n)";
{\out k + m + n = k + (m + n)}
{\out 1. k + m + n = k + (m + n)}
by (res_inst_tac [("n","k")] induct 1);
{\out k + m + n = k + (m + n)}
{\out 1. 0 + m + n = 0 + (m + n)}
{\out 2. !!x. x + m + n = x + (m + n) ==>}
{\out         Suc(x) + m + n = Suc(x) + (m + n)}
\end{ttbox}
The base case holds easily; both sides reduce to $m+n$.  The
tactic~\ttindex{simp_tac} rewrites with respect to the given simplification
set, applying the rewrite rules for addition:
\begin{ttbox}
by (simp_tac add_ss 1);
{\out k + m + n = k + (m + n)}
{\out 1. !!x. x + m + n = x + (m + n) ==>}
{\out         Suc(x) + m + n = Suc(x) + (m + n)}
\end{ttbox}
The inductive step requires rewriting by the equations for addition
together with the induction hypothesis, which is also an equation.  The
tactic~\ttindex{asm_simp_tac} rewrites using a simplification set and any
useful assumptions:
\begin{ttbox}
by (asm_simp_tac add_ss 1);
{\out k + m + n = k + (m + n)}
\end{ttbox}
\index{instantiation|)}
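The essence of what {\tt simp_tac} does with {\tt add_ss} can be sketched outside Isabelle.  The toy Python rewriter below (illustrative only; it omits the assumption handling of {\tt asm_simp_tac}) applies $0+n=n$ and ${\tt Suc}(m)+n={\tt Suc}(m+n)$ as left-to-right rewrite rules until a normal form is reached:

```python
# Toy rewriter mimicking simp_tac with the rules add_0 and add_Suc.
# Terms are nested tuples ("add", a, b) / ("Suc", a); atoms are strings.

ZERO = "0"   # any other string is a free variable, e.g. "m", "n"

def rewrite(t):
    """One bottom-up rewriting pass."""
    if not isinstance(t, tuple):
        return t
    t = tuple(rewrite(a) for a in t)
    if t[0] == "add":
        _, a, b = t
        if a == ZERO:                               # add_0:  0 + n = n
            return b
        if isinstance(a, tuple) and a[0] == "Suc":  # add_Suc
            return ("Suc", rewrite(("add", a[1], b)))
    return t

def normalize(t):
    """Iterate rewriting passes to a fixpoint (the normal form)."""
    while True:
        t2 = rewrite(t)
        if t2 == t:
            return t
        t = t2

# Base case of the associativity proof: both sides of
# 0 + m + n = 0 + (m + n) reduce to m + n, so the subgoal is proved.
lhs = ("add", ("add", ZERO, "m"), "n")
rhs = ("add", ZERO, ("add", "m", "n"))
assert normalize(lhs) == normalize(rhs) == ("add", "m", "n")
```

In the real inductive step, {\tt asm_simp_tac} additionally rewrites with the induction hypothesis $x+m+n = x+(m+n)$, which this sketch leaves out.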
\section{A Prolog interpreter}
\index{Prolog interpreter|bold}
To demonstrate the power of tacticals, let us construct a Prolog
interpreter and execute programs involving lists.\footnote{To run these
examples, see the file {\tt FOL/ex/Prolog.ML}.}  The Prolog program
consists of a theory.  We declare a type constructor for lists, with an
arity declaration to say that $(\tau)list$ is of class~$term$:
\begin{eqnarray*}
  list & :: & (term)term
\end{eqnarray*}
We declare four constants: the empty list~$Nil$; the infix list
constructor~{:}; the list concatenation predicate~$app$; the list reverse
predicate~$rev$.  (In Prolog, functions on lists are expressed as
predicates.)
\begin{eqnarray*}
  Nil & :: & \alpha list \\
  {:} & :: & [\alpha,\alpha list] \To \alpha list \\
  app & :: & [\alpha list,\alpha list,\alpha list] \To o \\
  rev & :: & [\alpha list,\alpha list] \To o
\end{eqnarray*}
The predicate $app$ should satisfy the Prolog-style rules
\[ {app(Nil,ys,ys)} \qquad
   {app(xs,ys,zs) \over app(x:xs, ys, x:zs)} \]
We define the naive version of $rev$, which calls~$app$:
\[ {rev(Nil,Nil)} \qquad
   {rev(xs,ys)\quad app(ys, x:Nil, zs) \over
    rev(x:xs, zs)} \]
\index{examples!of theories}
Theory \thydx{Prolog} extends first-order logic in order to make use
of the class~$term$ and the type~$o$.  The interpreter does not use the
rules of~{\tt FOL}.
\begin{ttbox}
arities list    :: (term)term
consts  Nil     :: 'a list
        ":"     :: ['a, 'a list]=> 'a list   (infixr 60)
        app     :: ['a list, 'a list, 'a list] => o
        rev     :: ['a list, 'a list] => o
rules   appNil  "app(Nil,ys,ys)"
        appCons "app(xs,ys,zs) ==> app(x:xs, ys, x:zs)"
        revNil  "rev(Nil,Nil)"
        revCons "[| rev(xs,ys); app(ys,x:Nil,zs) |] ==> rev(x:xs,zs)"
\end{ttbox}
\subsection{Simple executions}
Repeated application of the rules solves Prolog goals.  Let us
append the lists $[a,b,c]$ and~$[d,e]$.  As the rules are applied, the
answer builds up in~{\tt ?x}.
\begin{ttbox}
goal Prolog.thy "app(a:b:c:Nil, d:e:Nil, ?x)";
{\out app(a : b : c : Nil, d : e : Nil, ?x)}
{\out 1. app(a : b : c : Nil, d : e : Nil, ?x)}
by (resolve_tac [appNil,appCons] 1);
{\out app(a : b : c : Nil, d : e : Nil, a : ?zs1)}
{\out 1. app(b : c : Nil, d : e : Nil, ?zs1)}
by (resolve_tac [appNil,appCons] 1);
{\out app(a : b : c : Nil, d : e : Nil, a : b : ?zs2)}
{\out 1. app(c : Nil, d : e : Nil, ?zs2)}
\end{ttbox}
At this point, the first two elements of the result are~$a$ and~$b$.
\begin{ttbox}
by (resolve_tac [appNil,appCons] 1);
{\out app(a : b : c : Nil, d : e : Nil, a : b : c : ?zs3)}
{\out 1. app(Nil, d : e : Nil, ?zs3)}
by (resolve_tac [appNil,appCons] 1);
{\out app(a : b : c : Nil, d : e : Nil, a : b : c : d : e : Nil)}
\end{ttbox}
Prolog can run functions backwards.  Which list can be appended
with $[c,d]$ to produce $[a,b,c,d]$?
Using \ttindex{REPEAT}, we find the answer at once, $[a,b]$:
\begin{ttbox}
goal Prolog.thy "app(?x, c:d:Nil, a:b:c:d:Nil)";
{\out app(?x, c : d : Nil, a : b : c : d : Nil)}
{\out 1. app(?x, c : d : Nil, a : b : c : d : Nil)}
by (REPEAT (resolve_tac [appNil,appCons] 1));
{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
\end{ttbox}
\subsection{Backtracking}\index{backtracking!Prolog style}
Prolog backtracking can answer questions that have multiple solutions.
Which lists $x$ and $y$ can be appended to form the list $[a,b,c,d]$?  This
question has five solutions.  Using \ttindex{REPEAT} to apply the rules, we
quickly find the first solution, namely $x=[]$ and $y=[a,b,c,d]$:
\begin{ttbox}
goal Prolog.thy "app(?x, ?y, a:b:c:d:Nil)";
{\out app(?x, ?y, a : b : c : d : Nil)}
{\out 1. app(?x, ?y, a : b : c : d : Nil)}
by (REPEAT (resolve_tac [appNil,appCons] 1));
{\out app(Nil, a : b : c : d : Nil, a : b : c : d : Nil)}
\end{ttbox}
Isabelle can lazily generate all the possibilities.  The \ttindex{back}
command returns the tactic's next outcome, namely $x=[a]$ and $y=[b,c,d]$:
\begin{ttbox}
back();
{\out app(a : Nil, b : c : d : Nil, a : b : c : d : Nil)}
\end{ttbox}
The other solutions are generated similarly.
\begin{ttbox}
back();
{\out app(a : b : Nil, c : d : Nil, a : b : c : d : Nil)}
back();
{\out app(a : b : c : Nil, d : Nil, a : b : c : d : Nil)}
back();
{\out app(a : b : c : d : Nil, Nil, a : b : c : d : Nil)}
\end{ttbox}
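The lazy enumeration behind {\tt back} can be mimicked with a Python generator (illustrative only; the function name is invented).  Trying {\tt appNil} before {\tt appCons}, exactly as {\tt resolve_tac [appNil,appCons]} does, yields the five splits in the same order as the session above:

```python
# Generator sketch of the backtracking above: each next value plays
# the role of one back() command.

def app_splits(zs):
    """Lazily yield every (xs, ys) with xs ++ ys == zs, trying the
    appNil rule before the appCons rule."""
    yield [], zs                # appNil:  app(Nil, ys, ys)
    if zs:                      # appCons: app(xs,ys,zs) ==> app(x:xs, ys, x:zs)
        for xs, ys in app_splits(zs[1:]):
            yield [zs[0]] + xs, ys

solutions = list(app_splits(["a", "b", "c", "d"]))
assert len(solutions) == 5                          # five solutions
assert solutions[0] == ([], ["a", "b", "c", "d"])   # the REPEAT outcome
assert solutions[1] == (["a"], ["b", "c", "d"])     # first back()
assert solutions[-1] == (["a", "b", "c", "d"], [])  # last back()
```

As in Isabelle, no solution is computed until it is demanded.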
\subsection{Depth-first search}
\index{search!depth-first}
Now let us try $rev$, reversing a list.
Bundle the rules together as the \ML{} identifier {\tt rules}.  Naive
reverse requires 120 inferences for this 14-element list, but the tactic
terminates in a few seconds.
\begin{ttbox}
goal Prolog.thy "rev(a:b:c:d:e:f:g:h:i:j:k:l:m:n:Nil, ?w)";
{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil, ?w)}
{\out 1. rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
{\out        ?w)}
val rules = [appNil,appCons,revNil,revCons];
by (REPEAT (resolve_tac rules 1));
{\out rev(a : b : c : d : e : f : g : h : i : j : k : l : m : n : Nil,}
{\out     n : m : l : k : j : i : h : g : f : e : d : c : b : a : Nil)}
\end{ttbox}
We may execute $rev$ backwards.  This, too, should reverse a list.  What
is the reverse of $[a,b,c]$?
\begin{ttbox}
goal Prolog.thy "rev(?x, a:b:c:Nil)";
{\out rev(?x, a : b : c : Nil)}
{\out 1. rev(?x, a : b : c : Nil)}
by (REPEAT (resolve_tac rules 1));
{\out rev(?x1 : Nil, a : b : c : Nil)}
{\out 1. app(Nil, ?x1 : Nil, a : b : c : Nil)}
\end{ttbox}
The tactic has failed to find a solution!  It reached a dead end at
subgoal~1: there is no~$\Var{x@1}$ such that $Nil$ appended with~$[\Var{x@1}]$
equals~$[a,b,c]$.  Backtracking explores other outcomes.
\begin{ttbox}
back();
{\out rev(?x1 : a : Nil, a : b : c : Nil)}
{\out 1. app(Nil, ?x1 : Nil, b : c : Nil)}
\end{ttbox}
This too is a dead end, but the next outcome is successful.
\begin{ttbox}
back();
{\out rev(c : b : a : Nil, a : b : c : Nil)}
\end{ttbox}
\ttindex{REPEAT} goes wrong because it is only a repetition tactical, not a
search tactical.  {\tt REPEAT} stops when it cannot continue, regardless of
which state is reached.  The tactical \ttindex{DEPTH_FIRST} searches for a
satisfactory state, as specified by an \ML{} predicate.  Below,
\ttindex{has_fewer_prems} specifies that the proof state should have no
subgoals:
\begin{ttbox}
val prolog_tac = DEPTH_FIRST (has_fewer_prems 1)
                             (resolve_tac rules 1);
\end{ttbox}
Since Prolog uses depth-first search, this tactic is a (slow!)
Prolog interpreter.  We return to the start of the proof using
\ttindex{choplev}, and apply {\tt prolog_tac}:
\begin{ttbox}
choplev 0;
{\out rev(?x, a : b : c : Nil)}
{\out 1. rev(?x, a : b : c : Nil)}
by (DEPTH_FIRST (has_fewer_prems 1) (resolve_tac rules 1));
{\out rev(c : b : a : Nil, a : b : c : Nil)}
\end{ttbox}
Let us try {\tt prolog_tac} on one more example, containing four unknowns:
\begin{ttbox}
goal Prolog.thy "rev(a:?x:c:?y:Nil, d:?z:b:?u)";
{\out rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
{\out 1. rev(a : ?x : c : ?y : Nil, d : ?z : b : ?u)}
by prolog_tac;
{\out rev(a : b : c : d : Nil, d : c : b : a : Nil)}
\end{ttbox}
Although Isabelle is much slower than a Prolog system, Isabelle
tactics can exploit logic programming techniques.