\chapter{The Classical Reasoner}\label{chap:classical}
\index{classical reasoner|(}
\newcommand\ainfer[2]{\begin{array}{r@{\,}l}#2\\ \hline#1\end{array}}

Although Isabelle is generic, many users will be working in some extension of
classical first-order logic.  Isabelle's set theory~ZF is built upon
theory~FOL, while HOL conceptually contains first-order logic as a fragment.
Theorem-proving in predicate logic is undecidable, but many researchers have
developed strategies to assist in this task.

Isabelle's classical reasoner is an \ML{} functor that accepts certain
information about a logic and delivers a suite of automatic tactics.  Each
tactic takes a collection of rules and executes a simple, non-clausal proof
procedure.  They are slow and simplistic compared with resolution theorem
provers, but they can save considerable time and effort.  They can prove
theorems such as Pelletier's~\cite{pelletier86} problems~40 and~41 in
seconds:
\[ (\exists y. \forall x. J(y,x) \bimp \neg J(x,x))
   \imp  \neg (\forall x. \exists y. \forall z. J(z,y) \bimp \neg J(z,x)) \]
\[ (\forall z. \exists y. \forall x. F(x,y) \bimp F(x,z) \conj \neg F(x,x))
   \imp \neg (\exists z. \forall x. F(x,z)) \]
%
The tactics are generic.  They are not restricted to first-order logic, and
have been heavily used in the development of Isabelle's set theory.  Few
interactive proof assistants provide this much automation.  The tactics can
be traced, and their components can be called directly; in this manner,
any proof can be viewed interactively.

We shall first discuss the underlying principles, then present the classical
reasoner.  Finally, we shall see how to instantiate it for new logics.  The
logics FOL, ZF, HOL and HOLCF have it already installed.

\section{The sequent calculus}
\index{sequent calculus}
Isabelle supports natural deduction, which is easy to use for interactive
proof.  But natural deduction does not easily lend itself to automation,
and has a bias towards intuitionism.  For certain proofs in classical
logic, it cannot be called natural.  The {\bf sequent calculus}, a
generalization of natural deduction, is easier to automate.

A {\bf sequent} has the form $\Gamma\turn\Delta$, where $\Gamma$
and~$\Delta$ are sets of formulae.%
\footnote{For first-order logic, sequents can equivalently be made from
  lists or multisets of formulae.} The sequent
\[ P@1,\ldots,P@m\turn Q@1,\ldots,Q@n \]
is {\bf valid} if $P@1\conj\ldots\conj P@m$ implies $Q@1\disj\ldots\disj
Q@n$.  Thus $P@1,\ldots,P@m$ represent assumptions, each of which is true,
while $Q@1,\ldots,Q@n$ represent alternative goals.  A sequent is {\bf
  basic} if its left and right sides have a common formula, as in $P,Q\turn
Q,R$; basic sequents are trivially valid.

Sequent rules are classified as {\bf right} or {\bf left}, indicating which
side of the $\turn$~symbol they operate on.  Rules that operate on the
right side are analogous to natural deduction's introduction rules, and
left rules are analogous to elimination rules.
Recall the natural deduction rules for first-order logic,
\iflabelundefined{fol-fig}{from {\it Introduction to Isabelle}}%
                          {Fig.\ts\ref{fol-fig}}.
The sequent calculus analogue of~$({\imp}I)$ is the rule
$$ \ainfer{\Gamma &\turn \Delta, P\imp Q}{P,\Gamma &\turn \Delta,Q}
   \eqno({\imp}R) $$
This breaks down some implication on the right side of a sequent; $\Gamma$
and $\Delta$ stand for the sets of formulae that are unaffected by the
inference.  The analogue of the pair~$({\disj}I1)$ and~$({\disj}I2)$ is the
single rule
$$ \ainfer{\Gamma &\turn \Delta, P\disj Q}{\Gamma &\turn \Delta,P,Q}
   \eqno({\disj}R) $$
This breaks down some disjunction on the right side, replacing it by both
disjuncts.  Thus, the sequent calculus is a kind of multiple-conclusion logic.

To illustrate the use of multiple formulae on the right, let us prove
the classical theorem $(P\imp Q)\disj(Q\imp P)$.  Working backwards, we
reduce this formula to a basic sequent:
\[ \infer[(\disj)R]{\turn(P\imp Q)\disj(Q\imp P)}
   {\infer[(\imp)R]{\turn(P\imp Q), (Q\imp P)\;}
    {\infer[(\imp)R]{P \turn Q, (Q\imp P)\qquad}
                    {P, Q \turn Q, P\qquad\qquad}}}
\]
This example is typical of the sequent calculus: start with the desired
theorem and apply rules backwards in a fairly arbitrary manner.  This yields a
surprisingly effective proof procedure.  Quantifiers add few complications,
since Isabelle handles parameters and schematic variables.  See Chapter~10
of {\em ML for the Working Programmer}~\cite{paulson-ml2} for further
discussion and examples.

\section{Simulating sequents by natural deduction}
Isabelle can represent sequents directly, as in the object-logic~\texttt{LK}\@.
But natural deduction is easier to work with, and most object-logics employ
it.  Fortunately, we can simulate the sequent $P@1,\ldots,P@m\turn
Q@1,\ldots,Q@n$ by the Isabelle formula
\[ \List{P@1;\ldots;P@m; \neg Q@2;\ldots; \neg Q@n}\Imp Q@1, \]
where the order of the assumptions and the choice of~$Q@1$ are arbitrary.
Elim-resolution plays a key role in simulating sequent proofs.

We can easily handle reasoning on the left.
As discussed in
\iflabelundefined{destruct}{{\it Introduction to Isabelle}}{{\S}\ref{destruct}},
elim-resolution with the rules $(\disj E)$, $(\bot E)$ and $(\exists E)$
achieves a similar effect as the corresponding sequent rules.  For the
other connectives, we use sequent-style elimination rules instead of
destruction rules such as $({\conj}E1,2)$ and $(\forall E)$.  But note that
the rule $(\neg L)$ has no effect under our representation of sequents!
$$ \ainfer{\neg P,\Gamma &\turn \Delta}{\Gamma &\turn \Delta,P}\eqno({\neg}L) $$
What about reasoning on the right?  Introduction rules can only affect the
formula in the conclusion, namely~$Q@1$.  The other right-side formulae are
represented as negated assumptions, $\neg Q@2$, \ldots,~$\neg Q@n$.
\index{assumptions!negated}
In order to operate on one of these, it must first be exchanged with~$Q@1$.
Elim-resolution with the {\bf swap} rule has this effect:
$$ \List{\neg P; \; \neg R\Imp P} \Imp R \eqno(swap) $$
To ensure that swaps occur only when necessary, each introduction rule is
converted into a swapped form: it is resolved with the second premise
of~$(swap)$.  The swapped form of~$({\conj}I)$, which might be
called~$({\neg\conj}E)$, is
\[ \List{\neg(P\conj Q); \; \neg R\Imp P; \; \neg R\Imp Q} \Imp R. \]
Similarly, the swapped form of~$({\imp}I)$ is
\[ \List{\neg(P\imp Q); \; \List{\neg R;P}\Imp Q} \Imp R \]
Swapped introduction rules are applied using elim-resolution, which deletes
the negated formula.  Our representation of sequents also requires the use
of ordinary introduction rules.  If we had no regard for readability, we
could treat the right side more uniformly by representing sequents as
\[ \List{P@1;\ldots;P@m; \neg Q@1;\ldots; \neg Q@n}\Imp \bot. \]
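To make the encoding concrete, the basic sequent $P, Q\turn Q, P$ from the
earlier example becomes, under this representation, the Isabelle formula
\[ \List{P;\; Q;\; \neg P}\Imp Q, \]
which is proved immediately by assumption.  Since the choice of~$Q@1$ is
arbitrary, choosing $P$ as the conclusion instead gives
$\List{P;\; Q;\; \neg Q}\Imp P$, which is equally provable.
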
\section{Extra rules for the sequent calculus}
As mentioned, destruction rules such as $({\conj}E1,2)$ and $(\forall E)$
must be replaced by sequent-style elimination rules.  In addition, we need
rules to embody the classical equivalence between $P\imp Q$ and $\neg P\disj
Q$.  The introduction rules~$({\disj}I1,2)$ are replaced by a rule that
simulates $({\disj}R)$:
\[ (\neg Q\Imp P) \Imp P\disj Q \]
The destruction rule $({\imp}E)$ is replaced by
\[ \List{P\imp Q;\; \neg P\Imp R;\; Q\Imp R} \Imp R. \]
Quantifier replication also requires special rules.  In classical logic,
$\exists x{.}P$ is equivalent to $\neg\forall x{.}\neg P$; the rules
$(\exists R)$ and $(\forall L)$ are dual:
\[ \ainfer{\Gamma &\turn \Delta, \exists x{.}P}
          {\Gamma &\turn \Delta, \exists x{.}P, P[t/x]} \; (\exists R)
   \qquad
   \ainfer{\forall x{.}P, \Gamma &\turn \Delta}
          {P[t/x], \forall x{.}P, \Gamma &\turn \Delta} \; (\forall L)
\]
Thus both kinds of quantifier may be replicated.  Theorems requiring
multiple uses of a universal formula are easy to invent; consider
\[ (\forall x.P(x)\imp P(f(x))) \conj P(a) \imp P(f^n(a)), \]
for any~$n>1$.  Natural examples of the multiple use of an existential
formula are rare; a standard one is $\exists x.\forall y. P(x)\imp P(y)$.

Forgoing quantifier replication loses completeness, but gains decidability,
since the search space becomes finite.  Many useful theorems can be proved
without replication, and the search generally delivers its verdict in a
reasonable time.  To adopt this approach, represent the sequent rules
$(\exists R)$, $(\exists L)$ and $(\forall R)$ by $(\exists I)$, $(\exists
E)$ and $(\forall I)$, respectively, and put $(\forall E)$ into elimination
form:
$$ \List{\forall x{.}P(x); P(t)\Imp Q} \Imp Q    \eqno(\forall E@2) $$
Elim-resolution with this rule will delete the universal formula after a
single use.  To replicate universal quantifiers, replace the rule by
$$ \List{\forall x{.}P(x);\; \List{P(t); \forall x{.}P(x)}\Imp Q} \Imp Q.
   \eqno(\forall E@3) $$
To replicate existential quantifiers, replace $(\exists I)$ by
\[ \List{\neg(\exists x{.}P(x)) \Imp P(t)} \Imp \exists x{.}P(x). \]
All introduction rules mentioned above are also useful in swapped form.

Replication makes the search space infinite; we must apply the rules with
care.  The classical reasoner distinguishes between safe and unsafe
rules, applying the latter only when there is no alternative.  Depth-first
search may well go down a blind alley; best-first search is better behaved
in an infinite search space.  However, quantifier replication is too
expensive to prove any but the simplest theorems.

\section{Classical rule sets}
\index{classical sets}
Each automatic tactic takes a {\bf classical set} --- a collection of
rules, classified as introduction or elimination and as {\bf safe} or {\bf
  unsafe}.  In general, safe rules can be attempted blindly, while unsafe
rules must be used with care.  A safe rule must never reduce a provable
goal to an unprovable set of subgoals.

The rule~$({\disj}I1)$ is unsafe because it reduces $P\disj Q$ to~$P$.  Any
rule is unsafe whose premises contain new unknowns.  The elimination
rule~$(\forall E@2)$ is unsafe, since it is applied via elim-resolution,
which discards the assumption $\forall x{.}P(x)$ and replaces it by the
weaker assumption~$P(\Var{t})$.  The rule $({\exists}I)$ is unsafe for
similar reasons.  The rule~$(\forall E@3)$ is unsafe in a different sense:
since it keeps the assumption $\forall x{.}P(x)$, it is prone to looping.
In classical first-order logic, all rules are safe except those mentioned
above.

The safe/unsafe distinction is vague, and may be regarded merely as a way
of giving some rules priority over others.  One could argue that
$({\disj}E)$ is unsafe, because repeated application of it could generate
exponentially many subgoals.  Induction rules are unsafe because inductive
proofs are difficult to set up automatically.  Any inference is unsafe that
instantiates an unknown in the proof state --- thus \ttindex{match_tac}
must be used, rather than \ttindex{resolve_tac}.  Even proof by assumption
is unsafe if it instantiates unknowns shared with other subgoals --- thus
\ttindex{eq_assume_tac} must be used, rather than \ttindex{assume_tac}.

\subsection{Adding rules to classical sets}
Classical rule sets belong to the abstract type \mltydx{claset}, which
supports the following operations (provided the classical reasoner is
installed):
\begin{ttbox}
empty_cs    : claset
print_cs    : claset -> unit
rep_cs      : claset -> \{safeEs: thm list, safeIs: thm list,
                         hazEs: thm list,  hazIs: thm list,
                         swrappers: (string * wrapper) list,
                         uwrappers: (string * wrapper) list,
                         safe0_netpair: netpair, safep_netpair: netpair,
                         haz_netpair: netpair, dup_netpair: netpair\}
addSIs      : claset * thm list -> claset                 \hfill{\bf infix 4}
addSEs      : claset * thm list -> claset                 \hfill{\bf infix 4}
addSDs      : claset * thm list -> claset                 \hfill{\bf infix 4}
addIs       : claset * thm list -> claset                 \hfill{\bf infix 4}
addEs       : claset * thm list -> claset                 \hfill{\bf infix 4}
addDs       : claset * thm list -> claset                 \hfill{\bf infix 4}
delrules    : claset * thm list -> claset                 \hfill{\bf infix 4}
\end{ttbox}
The add operations ignore any rule already present in the claset with the same
classification (such as safe introduction).  They print a warning if the rule
has already been added with some other classification, but add the rule
anyway.  Calling \texttt{delrules} deletes all occurrences of a rule from the
claset, but see the warning below concerning destruction rules.
\begin{ttdescription}
\item[\ttindexbold{empty_cs}] is the empty classical set.

\item[\ttindexbold{print_cs} $cs$] displays the printable contents of~$cs$,
  which is the rules.  All other parts are non-printable.

\item[\ttindexbold{rep_cs} $cs$] decomposes $cs$ as a record of its internal
  components, namely the safe introduction and elimination rules, the unsafe
  introduction and elimination rules, the lists of safe and unsafe wrappers
  (see {\S}\ref{sec:modifying-search}), and the internalized forms of the
  rules.

\item[$cs$ addSIs $rules$] \indexbold{*addSIs}
adds safe introduction~$rules$ to~$cs$.

\item[$cs$ addSEs $rules$] \indexbold{*addSEs}
adds safe elimination~$rules$ to~$cs$.

\item[$cs$ addSDs $rules$] \indexbold{*addSDs}
adds safe destruction~$rules$ to~$cs$.

\item[$cs$ addIs $rules$] \indexbold{*addIs}
adds unsafe introduction~$rules$ to~$cs$.

\item[$cs$ addEs $rules$] \indexbold{*addEs}
adds unsafe elimination~$rules$ to~$cs$.

\item[$cs$ addDs $rules$] \indexbold{*addDs}
adds unsafe destruction~$rules$ to~$cs$.

\item[$cs$ delrules $rules$] \indexbold{*delrules}
deletes~$rules$ from~$cs$.  It prints a warning for those rules that are not
in~$cs$.
\end{ttdescription}

\begin{warn}
  If you added $rule$ using \texttt{addSDs} or \texttt{addDs}, then you must delete
  it as follows:
\begin{ttbox}
\(cs\) delrules [make_elim \(rule\)]
\end{ttbox}
\par\noindent
This is necessary because the operators \texttt{addSDs} and \texttt{addDs} convert
the destruction rules to elimination rules by applying \ttindex{make_elim},
and then insert them using \texttt{addSEs} and \texttt{addEs}, respectively.
\end{warn}

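To see why, note that \texttt{make_elim} converts a destruction rule into an
elimination rule by adding a final premise.  Applied to the destruction
rule~$({\imp}E)$, namely $\List{P\imp Q;\; P}\Imp Q$, it yields
\[ \List{P\imp Q;\; P;\; Q\Imp R}\Imp R, \]
and it is this derived rule, not the original one, that the claset stores.
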
Introduction rules are those that can be applied using ordinary resolution.
The classical set automatically generates their swapped forms, which will
be applied using elim-resolution.  Elimination rules are applied using
elim-resolution.  In a classical set, rules are sorted by the number of new
subgoals they will yield; rules that generate the fewest subgoals will be
tried first (see {\S}\ref{biresolve_tac}).

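As a concrete sketch, a small classical set for propositional reasoning might
be built as follows; the rule names are assumed to be those of FOL, and the
classification follows the discussion above: $({\conj}I)$, $({\imp}I)$ and
$({\disj}E)$ are safe, while $({\disj}I1,2)$ must be classified as unsafe:
\begin{ttbox}
val prop_cs = empty_cs addSIs [conjI, impI]
                       addSEs [conjE, disjE]
                       addIs  [disjI1, disjI2];
\end{ttbox}
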
For elimination and destruction rules there are variants of the add operations
that add a rule in such a way that it is applied only if its second premise
can also be unified with an assumption of the current proof state:
\indexbold{*addSE2}\indexbold{*addSD2}\indexbold{*addE2}\indexbold{*addD2}
\begin{ttbox}
addSE2      : claset * (string * thm) -> claset           \hfill{\bf infix 4}
addSD2      : claset * (string * thm) -> claset           \hfill{\bf infix 4}
addE2       : claset * (string * thm) -> claset           \hfill{\bf infix 4}
addD2       : claset * (string * thm) -> claset           \hfill{\bf infix 4}
\end{ttbox}
A rule to be added in this special way must be given a name, which is used
to delete it again --- when desired --- using \texttt{delSWrappers} or
\texttt{delWrappers}, respectively.  This is because these add operations
are implemented as wrappers (see {\S}\ref{sec:modifying-search} below).

\subsection{Modifying the search step}
\label{sec:modifying-search}
For a given classical set, the proof strategy is simple.  Perform as many safe
inferences as possible; or else, apply certain safe rules, allowing
instantiation of unknowns; or else, apply an unsafe rule.  The tactics also
eliminate assumptions of the form $x=t$ by substitution if they have been set
up to do so (see \texttt{hyp_subst_tacs} in~{\S}\ref{sec:classical-setup} below).
They may perform a form of Modus Ponens: if there are assumptions $P\imp Q$
and~$P$, then replace $P\imp Q$ by~$Q$.

The classical reasoning tactics --- except \texttt{blast_tac}! --- allow
you to modify this basic proof strategy by applying two lists of arbitrary
{\bf wrapper tacticals} to it.
The first wrapper list, which is considered to contain safe wrappers only,
affects \ttindex{safe_step_tac} and all the tactics that call it.
The second one, which may contain unsafe wrappers, affects the unsafe parts
of \ttindex{step_tac}, \ttindex{slow_step_tac}, and the tactics that call them.
A wrapper transforms each step of the search, for example
by attempting other tactics before or after the original step tactic.
All members of a wrapper list are applied in turn to the respective step tactic.

Initially the two wrapper lists are empty, which means no modification of the
step tactics.  Safe and unsafe wrappers are added to a claset
with the functions given below, supplying them with wrapper names.
These names may be used to selectively delete wrappers.

\begin{ttbox}
type wrapper = (int -> tactic) -> (int -> tactic);

addSWrapper : claset * (string *  wrapper)         -> claset \hfill{\bf infix 4}
addSbefore  : claset * (string * (int -> tactic))  -> claset \hfill{\bf infix 4}
addSafter   : claset * (string * (int -> tactic))  -> claset \hfill{\bf infix 4}
delSWrapper : claset *  string                     -> claset \hfill{\bf infix 4}

addWrapper  : claset * (string *  wrapper)         -> claset \hfill{\bf infix 4}
addbefore   : claset * (string * (int -> tactic))  -> claset \hfill{\bf infix 4}
addafter    : claset * (string * (int -> tactic))  -> claset \hfill{\bf infix 4}
delWrapper  : claset *  string                     -> claset \hfill{\bf infix 4}

addSss      : claset * simpset -> claset                     \hfill{\bf infix 4}
addss       : claset * simpset -> claset                     \hfill{\bf infix 4}
\end{ttbox}
\begin{ttdescription}
\item[$cs$ addSWrapper $(name,wrapper)$] \indexbold{*addSWrapper}
adds a new wrapper, which should yield a safe tactic,
to modify the existing safe step tactic.

\item[$cs$ addSbefore $(name,tac)$] \indexbold{*addSbefore}
adds the given tactic as a safe wrapper, such that it is tried
{\em before} each safe step of the search.

\item[$cs$ addSafter $(name,tac)$] \indexbold{*addSafter}
adds the given tactic as a safe wrapper, such that it is tried
when a safe step of the search would fail.

\item[$cs$ delSWrapper $name$] \indexbold{*delSWrapper}
deletes the safe wrapper with the given name.

\item[$cs$ addWrapper $(name,wrapper)$] \indexbold{*addWrapper}
adds a new wrapper to modify the existing (unsafe) step tactic.

\item[$cs$ addbefore $(name,tac)$] \indexbold{*addbefore}
adds the given tactic as an unsafe wrapper, such that its result is
concatenated {\em before} the result of each unsafe step.

\item[$cs$ addafter $(name,tac)$] \indexbold{*addafter}
adds the given tactic as an unsafe wrapper, such that its result is
concatenated {\em after} the result of each unsafe step.

\item[$cs$ delWrapper $name$] \indexbold{*delWrapper}
deletes the unsafe wrapper with the given name.

\item[$cs$ addSss $ss$] \indexbold{*addSss}
adds the simpset~$ss$ to the classical set.  The assumptions and goal will be
simplified, in a rather safe way, after each safe step of the search.

\item[$cs$ addss $ss$] \indexbold{*addss}
adds the simpset~$ss$ to the classical set.  The assumptions and goal will be
simplified before each unsafe step of the search.
\end{ttdescription}

\index{simplification!from classical reasoner}
Strictly speaking, the operators \texttt{addss} and \texttt{addSss}
are not part of the classical reasoner.
These operators, which are used as primitives
for the automatic tactics described in {\S}\ref{sec:automatic-tactics}, are
implemented as wrapper tacticals.

\begin{warn}
Being defined as wrappers, these operators are inappropriate for adding more
than one simpset at a time: the simpset added last overwrites any earlier ones.
When a simpset combined with a claset is to be augmented, this should be done
{\em before} combining it with the claset.
\end{warn}

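For example (a hypothetical sketch, assuming a claset \texttt{cs} and a list
of rewrite rules \texttt{defs}), augment the simpset first and only then
combine it with the claset:
\begin{ttbox}
val ss' = simpset() addsimps defs;    (*augment the simpset first ...*)
val cs' = cs addss ss';               (*... then combine*)
\end{ttbox}
Adding two simpsets to \texttt{cs} in succession would leave only the last
one in effect.
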
\section{The classical tactics}
\index{classical reasoner!tactics} If installed, the classical module provides
powerful theorem-proving tactics.

\subsection{The tableau prover}
The tactic \texttt{blast_tac} searches for a proof using a fast tableau prover,
coded directly in \ML.  It then reconstructs the proof using Isabelle
tactics.  It is faster and more powerful than the other classical
reasoning tactics, but has major limitations too.
\begin{itemize}
\item It does not use the wrapper tacticals described above, such as
  \ttindex{addss}.
\item It does not perform higher-order unification, as needed by the rule {\tt
    rangeI} in HOL and \texttt{RepFunI} in ZF.  There are often alternatives
  to such rules, for example {\tt range_eqI} and \texttt{RepFun_eqI}.
\item Function variables may only be applied to parameters of the subgoal.
  (This restriction arises because the prover does not use higher-order
  unification.)  If other function variables are present then the prover will
  fail with the message {\small\tt Function Var's argument not a bound variable}.
\item Its proof strategy is more general than \texttt{fast_tac}'s but can be
  slower.  If \texttt{blast_tac} fails or seems to be running forever, try {\tt
  fast_tac} and the other tactics described below.
\end{itemize}

\begin{ttbox}
blast_tac       : claset -> int -> tactic
Blast.depth_tac : claset -> int -> int -> tactic
Blast.trace     : bool ref \hfill{\bf initially false}
\end{ttbox}
The two tactics differ on how they bound the number of unsafe steps used in a
proof.  While \texttt{blast_tac} starts with a bound of zero and increases it
successively to~20, \texttt{Blast.depth_tac} applies a user-supplied search bound.
\begin{ttdescription}
\item[\ttindexbold{blast_tac} $cs$ $i$] tries to prove
  subgoal~$i$, increasing the search bound using iterative
  deepening~\cite{korf85}.

\item[\ttindexbold{Blast.depth_tac} $cs$ $lim$ $i$] tries
  to prove subgoal~$i$ using a search bound of $lim$.  Sometimes a slow
  proof using \texttt{blast_tac} can be made much faster by supplying the
  successful search bound to this tactic instead.

\item[set \ttindexbold{Blast.trace};] \index{tracing!of classical prover}
  causes the tableau prover to print a trace of its search.  At each step it
  displays the formula currently being examined and reports whether the branch
  has been closed, extended or split.
\end{ttdescription}

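A typical invocation, in an old-style tactic script (assuming the current
claset is accessible as \texttt{claset()}, as in FOL and HOL), might be:
\begin{ttbox}
set Blast.trace;                      (*optional: watch the search*)
by (blast_tac (claset()) 1);
by (Blast.depth_tac (claset()) 4 1);  (*re-run with a known bound*)
\end{ttbox}
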
\subsection{Automatic tactics}\label{sec:automatic-tactics}
\begin{ttbox}
type clasimpset = claset * simpset;
auto_tac  : clasimpset ->        tactic
force_tac : clasimpset -> int -> tactic
\end{ttbox}
The automatic tactics attempt to prove goals using a combination of
simplification and classical reasoning.
\begin{ttdescription}
\item[\ttindexbold{auto_tac $(cs,ss)$}] is intended for situations where
there are a lot of mostly trivial subgoals; it proves all the easy ones,
leaving the ones it cannot prove.
(Unfortunately, attempting to prove the hard ones may take a long time.)

\item[\ttindexbold{force_tac} $(cs,ss)$ $i$] is intended to prove subgoal~$i$
completely.  It tries to apply all fancy tactics it knows about,
performing a rather exhaustive search.
\end{ttdescription}

\subsection{Semi-automatic tactics}
\begin{ttbox}
clarify_tac      : claset -> int -> tactic
clarify_step_tac : claset -> int -> tactic
clarsimp_tac     : clasimpset -> int -> tactic
\end{ttbox}
Use these when the automatic tactics fail.  They perform all the obvious
logical inferences that do not split the subgoal.  The result is a
simpler subgoal that can be tackled by other means, such as by
instantiating quantifiers yourself.
\begin{ttdescription}
\item[\ttindexbold{clarify_tac} $cs$ $i$] performs a series of safe steps on
subgoal~$i$ by repeatedly calling \texttt{clarify_step_tac}.

\item[\ttindexbold{clarify_step_tac} $cs$ $i$] performs a safe step on
  subgoal~$i$.  No splitting step is applied; for example, the subgoal $A\conj
  B$ is left as a conjunction.  Proof by assumption, Modus Ponens, etc., may be
  performed provided they do not instantiate unknowns.  Assumptions of the
  form $x=t$ may be eliminated.  The user-supplied safe wrapper tactical is
  applied.

\item[\ttindexbold{clarsimp_tac} $cs$ $i$] acts like \texttt{clarify_tac}, but
also does simplification with the given simpset.  Note that if the simpset
includes a splitter for the premises, the subgoal may still be split.
\end{ttdescription}

\subsection{Other classical tactics}
\begin{ttbox}
fast_tac      : claset -> int -> tactic
best_tac      : claset -> int -> tactic
slow_tac      : claset -> int -> tactic
slow_best_tac : claset -> int -> tactic
\end{ttbox}
These tactics attempt to prove a subgoal using sequent-style reasoning.
Unlike \texttt{blast_tac}, they construct proofs directly in Isabelle.  Their
effect is restricted (by \texttt{SELECT_GOAL}) to one subgoal; they either prove
this subgoal or fail.  The \texttt{slow_} versions conduct a broader
search.%
\footnote{They may, when backtracking from a failed proof attempt, undo even
  the step of proving a subgoal by assumption.}

The best-first tactics are guided by a heuristic function: typically, the
total size of the proof state.  This function is supplied in the functor call
that sets up the classical reasoner.
\begin{ttdescription}
\item[\ttindexbold{fast_tac} $cs$ $i$] applies \texttt{step_tac} using
depth-first search to prove subgoal~$i$.

\item[\ttindexbold{best_tac} $cs$ $i$] applies \texttt{step_tac} using
best-first search to prove subgoal~$i$.

\item[\ttindexbold{slow_tac} $cs$ $i$] applies \texttt{slow_step_tac} using
depth-first search to prove subgoal~$i$.

\item[\ttindexbold{slow_best_tac} $cs$ $i$] applies \texttt{slow_step_tac} with
best-first search to prove subgoal~$i$.
\end{ttdescription}

\subsection{Depth-limited automatic tactics}
\begin{ttbox}
depth_tac  : claset -> int -> int -> tactic
deepen_tac : claset -> int -> int -> tactic
\end{ttbox}
These work by exhaustive search up to a specified depth.  Unsafe rules are
modified to preserve the formula they act on, so that they can be used
repeatedly.  They can prove more goals than \texttt{fast_tac} can but are much
slower, for example if the assumptions have many universal quantifiers.

The depth limits the number of unsafe steps.  If you can estimate the minimum
number of unsafe steps needed, supply this value as~$m$ to save time.
\begin{ttdescription}
\item[\ttindexbold{depth_tac} $cs$ $m$ $i$]
tries to prove subgoal~$i$ by exhaustive search up to depth~$m$.

\item[\ttindexbold{deepen_tac} $cs$ $m$ $i$]
tries to prove subgoal~$i$ by iterative deepening.  It calls \texttt{depth_tac}
repeatedly with increasing depths, starting with~$m$.
\end{ttdescription}

\subsection{Single-step tactics}
\begin{ttbox}
safe_step_tac : claset -> int -> tactic
safe_tac      : claset        -> tactic
inst_step_tac : claset -> int -> tactic
step_tac      : claset -> int -> tactic
slow_step_tac : claset -> int -> tactic
\end{ttbox}
The automatic proof procedures call these tactics.  By calling them
yourself, you can execute these procedures one step at a time.
\begin{ttdescription}
\item[\ttindexbold{safe_step_tac} $cs$ $i$] performs a safe step on
  subgoal~$i$.  The safe wrapper tacticals are applied to a tactic that may
  include proof by assumption or Modus Ponens (taking care not to instantiate
  unknowns), or substitution.

\item[\ttindexbold{safe_tac} $cs$] repeatedly performs safe steps on all
subgoals.  It is deterministic, with at most one outcome.

\item[\ttindexbold{inst_step_tac} $cs$ $i$] is like \texttt{safe_step_tac},
but allows unknowns to be instantiated.

\item[\ttindexbold{step_tac} $cs$ $i$] is the basic step of the proof
  procedure.  The unsafe wrapper tacticals are applied to a tactic that tries
  \texttt{safe_tac}, \texttt{inst_step_tac}, or applies an unsafe rule
  from~$cs$.

\item[\ttindexbold{slow_step_tac}]
  resembles \texttt{step_tac}, but allows backtracking between using safe
  rules with instantiation (\texttt{inst_step_tac}) and using unsafe rules.
  The resulting search space is larger.
\end{ttdescription}

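The proof procedures can thus be replayed by hand.  A hypothetical
interactive session, with some claset~\texttt{cs}, might proceed as follows:
\begin{ttbox}
by (safe_tac cs);                 (*perform all safe inferences first*)
by (step_tac cs 1);               (*one full step, possibly unsafe*)
by (REPEAT (step_tac cs 1));      (*keep stepping; unlike fast_tac,
                                    no backtracking search is performed*)
\end{ttbox}
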
\subsection{Other useful tactics}
\index{tactics!for contradiction}
\index{tactics!for Modus Ponens}
\begin{ttbox}
contr_tac    :             int -> tactic
mp_tac       :             int -> tactic
eq_mp_tac    :             int -> tactic
swap_res_tac : thm list -> int -> tactic
\end{ttbox}
These can be used in the body of a specialized search.
\begin{ttdescription}
\item[\ttindexbold{contr_tac} {\it i}]\index{assumptions!contradictory}
  solves subgoal~$i$ by detecting a contradiction among two assumptions of
  the form $P$ and~$\neg P$, or fails.  It may instantiate unknowns.  The
  tactic can produce multiple outcomes, enumerating all possible
  contradictions.

\item[\ttindexbold{mp_tac} {\it i}]
is like \texttt{contr_tac}, but also attempts to perform Modus Ponens in
subgoal~$i$.  If there are assumptions $P\imp Q$ and~$P$, then it replaces
$P\imp Q$ by~$Q$.  It may instantiate unknowns.  It fails if it can do
nothing.

\item[\ttindexbold{eq_mp_tac} {\it i}]
is like \texttt{mp_tac} {\it i}, but may not instantiate unknowns --- thus, it
is safe.

\item[\ttindexbold{swap_res_tac} {\it thms} {\it i}] refines subgoal~$i$ of
the proof state using {\it thms}, which should be a list of introduction
rules.  First, it attempts to prove the goal using \texttt{assume_tac} or
\texttt{contr_tac}.  It then attempts to apply each rule in turn, attempting
resolution and also elim-resolution with the swapped form.
\end{ttdescription}

\subsection{Creating swapped rules}
\begin{ttbox}
swapify   : thm list -> thm list
joinrules : thm list * thm list -> (bool * thm) list
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{swapify} {\it thms}] returns a list consisting of the
swapped versions of~{\it thms}, regarded as introduction rules.

\item[\ttindexbold{joinrules} ({\it intrs}, {\it elims})]
joins introduction rules, their swapped versions, and elimination rules for
use with \ttindex{biresolve_tac}.  Each rule is paired with~\texttt{false}
(indicating ordinary resolution) or~\texttt{true} (indicating
elim-resolution).
\end{ttdescription}

647 \index{classical reasoner!setting up}
648 Isabelle's classical object-logics, including \texttt{FOL} and \texttt{HOL},
649 have the classical reasoner already set up.
650 When defining a new classical logic, you should set up the reasoner yourself.
651 It consists of the \ML{} functor \ttindex{ClassicalFun}, which takes the
652 argument signature \texttt{CLASSICAL_DATA}:
\begin{ttbox}
signature CLASSICAL_DATA =
  sig
  val mp             : thm
  val not_elim       : thm
  val swap           : thm
  val sizef          : thm -> int
  val hyp_subst_tacs : (int -> tactic) list
  end;
\end{ttbox}
Thus, the functor requires the following items:
\begin{ttdescription}
\item[\tdxbold{mp}] should be the Modus Ponens rule
$\List{\Var{P}\imp\Var{Q};\; \Var{P}} \Imp \Var{Q}$.

\item[\tdxbold{not_elim}] should be the contradiction rule
$\List{\neg\Var{P};\; \Var{P}} \Imp \Var{R}$.

\item[\tdxbold{swap}] should be the swap rule
$\List{\neg \Var{P}; \; \neg \Var{R}\Imp \Var{P}} \Imp \Var{R}$.

\item[\ttindexbold{sizef}] is the heuristic function used for best-first
search.  It should estimate the size of the remaining subgoals.  A good
heuristic function is \ttindex{size_of_thm}, which measures the size of the
proof state.  Another size function might ignore certain subgoals (say,
those concerned with type-checking).  A heuristic function might simply
count subgoals.

\item[\ttindexbold{hyp_subst_tacs}] is a list of tactics for substitution in
the hypotheses, typically created by \ttindex{HypsubstFun} (see
Chapter~\ref{substitution}).  This list can, of course, be empty.  The
tactics are assumed to be safe!
\end{ttdescription}
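As an illustration, an instantiation for a logic resembling FOL might look as
follows; the theorems \texttt{mp} and \texttt{swap}, the contradiction rule
\texttt{notE} and the tactic \texttt{hyp_subst_tac} are assumed to be
provided by the object-logic:
\begin{ttbox}
structure Classical_Data : CLASSICAL_DATA =
  struct
  val mp             = mp
  val not_elim       = notE
  val swap           = swap
  val sizef          = size_of_thm
  val hyp_subst_tacs = [hyp_subst_tac]
  end;

structure Cla = ClassicalFun(Classical_Data);
\end{ttbox}
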
The functor is not at all sensitive to the formalization of the
object-logic.  It does not even examine the rules, but merely applies
them according to its fixed strategy.  The functor resides in {\tt
  Provers/classical.ML} in the Isabelle sources.

\index{classical reasoner|)}