%% $Id$
\chapter{The Classical Reasoner}\label{chap:classical}
\index{classical reasoner|(}
\newcommand\ainfer[2]{\begin{array}{r@{\,}l}#2\\ \hline#1\end{array}}

Although Isabelle is generic, many users will be working in some extension
of classical first-order logic.  Isabelle's set theory~{\tt ZF} is built
upon theory~{\tt FOL}, while higher-order logic contains first-order logic
as a fragment.  Theorem-proving in predicate logic is undecidable, but many
researchers have developed strategies to assist in this task.

Isabelle's classical reasoner is an \ML{} functor that accepts certain
information about a logic and delivers a suite of automatic tactics.  Each
tactic takes a collection of rules and executes a simple, non-clausal proof
procedure.  They are slow and simplistic compared with resolution theorem
provers, but they can save considerable time and effort.  They can prove
theorems such as Pelletier's~\cite{pelletier86} problems~40 and~41 in
seconds:
\[ (\exists y. \forall x. J(y,x) \bimp \neg J(x,x))  
   \imp  \neg (\forall x. \exists y. \forall z. J(z,y) \bimp \neg J(z,x)) \]
\[ (\forall z. \exists y. \forall x. F(x,y) \bimp F(x,z) \conj \neg F(x,x))
   \imp \neg (\exists z. \forall x. F(x,z))  
\]
%
The tactics are generic.  They are not restricted to first-order logic, and
have been heavily used in the development of Isabelle's set theory.  Few
interactive proof assistants provide this much automation.  The tactics can
be traced, and their components can be called directly; in this manner,
any proof can be viewed interactively.

The simplest way to apply the classical reasoner (to subgoal~$i$) is as
follows:
\begin{ttbox}
by (Fast_tac \(i\));
\end{ttbox}
If the subgoal is a simple formula of the predicate calculus or set theory,
then it should be proved quickly.  However, to use the classical reasoner
effectively, you need to know how it works.

We shall first discuss the underlying principles, then present the classical
reasoner.  Finally, we shall see how to instantiate it for new logics.  The
logics {\tt FOL}, {\tt HOL} and {\tt ZF} have it already installed.


\section{The sequent calculus}
\index{sequent calculus}
Isabelle supports natural deduction, which is easy to use for interactive
proof.  But natural deduction does not easily lend itself to automation,
and has a bias towards intuitionism.  For certain proofs in classical
logic, it cannot be called natural.  The {\bf sequent calculus}, a
generalization of natural deduction, is easier to automate.

A {\bf sequent} has the form $\Gamma\turn\Delta$, where $\Gamma$
and~$\Delta$ are sets of formulae.%
\footnote{For first-order logic, sequents can equivalently be made from
  lists or multisets of formulae.} The sequent
\[ P@1,\ldots,P@m\turn Q@1,\ldots,Q@n \]
is {\bf valid} if $P@1\conj\ldots\conj P@m$ implies $Q@1\disj\ldots\disj
Q@n$.  Thus $P@1,\ldots,P@m$ represent assumptions, each of which is true,
while $Q@1,\ldots,Q@n$ represent alternative goals.  A sequent is {\bf
basic} if its left and right sides have a common formula, as in $P,Q\turn
Q,R$; basic sequents are trivially valid.

Sequent rules are classified as {\bf right} or {\bf left}, indicating which
side of the $\turn$~symbol they operate on.  Rules that operate on the
right side are analogous to natural deduction's introduction rules, and
left rules are analogous to elimination rules.  
Recall the natural deduction rules for
  first-order logic, 
\iflabelundefined{fol-fig}{from {\it Introduction to Isabelle}}%
                          {Fig.\ts\ref{fol-fig}}.
The sequent calculus analogue of~$({\imp}I)$ is the rule
$$ \ainfer{\Gamma &\turn \Delta, P\imp Q}{P,\Gamma &\turn \Delta,Q}
   \eqno({\imp}R) $$
This breaks down some implication on the right side of a sequent; $\Gamma$
and $\Delta$ stand for the sets of formulae that are unaffected by the
inference.  The analogue of the pair~$({\disj}I1)$ and~$({\disj}I2)$ is the
single rule 
$$ \ainfer{\Gamma &\turn \Delta, P\disj Q}{\Gamma &\turn \Delta,P,Q}
   \eqno({\disj}R) $$
This breaks down some disjunction on the right side, replacing it by both
disjuncts.  Thus, the sequent calculus is a kind of multiple-conclusion logic.

To illustrate the use of multiple formulae on the right, let us prove
the classical theorem $(P\imp Q)\disj(Q\imp P)$.  Working backwards, we
reduce this formula to a basic sequent:
\[ \infer[(\disj)R]{\turn(P\imp Q)\disj(Q\imp P)}
   {\infer[(\imp)R]{\turn(P\imp Q), (Q\imp P)\;}
    {\infer[(\imp)R]{P \turn Q, (Q\imp P)\qquad}
                    {P, Q \turn Q, P\qquad\qquad}}}
\]
This example is typical of the sequent calculus: start with the desired
theorem and apply rules backwards in a fairly arbitrary manner.  This yields a
surprisingly effective proof procedure.  Quantifiers add few complications,
since Isabelle handles parameters and schematic variables.  See Chapter~10
of {\em ML for the Working Programmer}~\cite{paulson91} for further
discussion.


\section{Simulating sequents by natural deduction}
Isabelle can represent sequents directly, as in the object-logic~{\tt LK}\@.
But natural deduction is easier to work with, and most object-logics employ
it.  Fortunately, we can simulate the sequent $P@1,\ldots,P@m\turn
Q@1,\ldots,Q@n$ by the Isabelle formula
\[ \List{P@1;\ldots;P@m; \neg Q@2;\ldots; \neg Q@n}\Imp Q@1, \]
where the order of the assumptions and the choice of~$Q@1$ are arbitrary.
Elim-resolution plays a key role in simulating sequent proofs.

We can easily handle reasoning on the left.
As discussed in
\iflabelundefined{destruct}{{\it Introduction to Isabelle}}{\S\ref{destruct}}, 
elim-resolution with the rules $(\disj E)$, $(\bot E)$ and $(\exists E)$
achieves an effect similar to that of the corresponding sequent rules.  For the
other connectives, we use sequent-style elimination rules instead of
destruction rules such as $({\conj}E1,2)$ and $(\forall E)$.  But note that
the rule $(\neg L)$ has no effect under our representation of sequents!
$$ \ainfer{\neg P,\Gamma &\turn \Delta}{\Gamma &\turn \Delta,P}
   \eqno({\neg}L) $$
What about reasoning on the right?  Introduction rules can only affect the
formula in the conclusion, namely~$Q@1$.  The other right-side formulae are
represented as negated assumptions, $\neg Q@2$, \ldots,~$\neg Q@n$.  
\index{assumptions!negated}
In order to operate on one of these, it must first be exchanged with~$Q@1$.
Elim-resolution with the {\bf swap} rule has this effect:
$$ \List{\neg P; \; \neg R\Imp P} \Imp R   \eqno(swap)$$
To ensure that swaps occur only when necessary, each introduction rule is
converted into a swapped form: it is resolved with the second premise
of~$(swap)$.  The swapped form of~$({\conj}I)$, which might be
called~$({\neg\conj}E)$, is
\[ \List{\neg(P\conj Q); \; \neg R\Imp P; \; \neg R\Imp Q} \Imp R. \]
Similarly, the swapped form of~$({\imp}I)$ is
\[ \List{\neg(P\imp Q); \; \List{\neg R;P}\Imp Q} \Imp R. \]
Swapped introduction rules are applied using elim-resolution, which deletes
the negated formula.  Our representation of sequents also requires the use
of ordinary introduction rules.  If we had no regard for readability, we
could treat the right side more uniformly by representing sequents as
\[ \List{P@1;\ldots;P@m; \neg Q@1;\ldots; \neg Q@n}\Imp \bot. \]


\section{Extra rules for the sequent calculus}
As mentioned, destruction rules such as $({\conj}E1,2)$ and $(\forall E)$
must be replaced by sequent-style elimination rules.  In addition, we need
rules to embody the classical equivalence between $P\imp Q$ and $\neg P\disj
Q$.  The introduction rules~$({\disj}I1,2)$ are replaced by a rule that
simulates $({\disj}R)$:
\[ (\neg Q\Imp P) \Imp P\disj Q \]
The destruction rule $({\imp}E)$ is replaced by
\[ \List{P\imp Q;\; \neg P\Imp R;\; Q\Imp R} \Imp R. \]
Quantifier replication also requires special rules.  In classical logic,
$\exists x{.}P$ is equivalent to $\neg\forall x{.}\neg P$; the rules
$(\exists R)$ and $(\forall L)$ are dual:
\[ \ainfer{\Gamma &\turn \Delta, \exists x{.}P}
          {\Gamma &\turn \Delta, \exists x{.}P, P[t/x]} \; (\exists R)
   \qquad
   \ainfer{\forall x{.}P, \Gamma &\turn \Delta}
          {P[t/x], \forall x{.}P, \Gamma &\turn \Delta} \; (\forall L)
\]
Thus both kinds of quantifier may be replicated.  Theorems requiring
multiple uses of a universal formula are easy to invent; consider 
\[ (\forall x.P(x)\imp P(f(x))) \conj P(a) \imp P(f^n(a)), \]
for any~$n>1$.  Natural examples of the multiple use of an existential
formula are rare; a standard one is $\exists x.\forall y. P(x)\imp P(y)$.

Forgoing quantifier replication loses completeness, but gains decidability,
since the search space becomes finite.  Many useful theorems can be proved
without replication, and the search generally delivers its verdict in a
reasonable time.  To adopt this approach, represent the sequent rules
$(\exists R)$, $(\exists L)$ and $(\forall R)$ by $(\exists I)$, $(\exists
E)$ and $(\forall I)$, respectively, and put $(\forall E)$ into elimination
form:
$$ \List{\forall x{.}P(x); P(t)\Imp Q} \Imp Q    \eqno(\forall E@2) $$
Elim-resolution with this rule will delete the universal formula after a
single use.  To replicate universal quantifiers, replace the rule by
$$ \List{\forall x{.}P(x);\; \List{P(t); \forall x{.}P(x)}\Imp Q} \Imp Q.
   \eqno(\forall E@3) $$
To replicate existential quantifiers, replace $(\exists I)$ by
\[ \List{\neg(\exists x{.}P(x)) \Imp P(t)} \Imp \exists x{.}P(x). \]
All introduction rules mentioned above are also useful in swapped form.

Replication makes the search space infinite; we must apply the rules with
care.  The classical reasoner distinguishes between safe and unsafe
rules, applying the latter only when there is no alternative.  Depth-first
search may well go down a blind alley; best-first search is better behaved
in an infinite search space.  However, quantifier replication makes the
search too expensive to prove any but the simplest theorems.


\section{Classical rule sets}
\index{classical sets}
Each automatic tactic takes a {\bf classical set} --- a collection of
rules, classified as introduction or elimination and as {\bf safe} or {\bf
unsafe}.  In general, safe rules can be attempted blindly, while unsafe
rules must be used with care.  A safe rule must never reduce a provable
goal to an unprovable set of subgoals.  

The rule~$({\disj}I1)$ is unsafe because it reduces $P\disj Q$ to~$P$.  Any
rule is unsafe whose premises contain new unknowns.  The elimination
rule~$(\forall E@2)$ is unsafe, since it is applied via elim-resolution,
which discards the assumption $\forall x{.}P(x)$ and replaces it by the
weaker assumption~$P(\Var{t})$.  The rule $({\exists}I)$ is unsafe for
similar reasons.  The rule~$(\forall E@3)$ is unsafe in a different sense:
since it keeps the assumption $\forall x{.}P(x)$, it is prone to looping.
In classical first-order logic, all rules are safe except those mentioned
above.

The safe/unsafe distinction is vague, and may be regarded merely as a way
of giving some rules priority over others.  One could argue that
$({\disj}E)$ is unsafe, because repeated application of it could generate
exponentially many subgoals.  Induction rules are unsafe because inductive
proofs are difficult to set up automatically.  Any inference is unsafe that
instantiates an unknown in the proof state --- thus \ttindex{match_tac}
must be used, rather than \ttindex{resolve_tac}.  Even proof by assumption
is unsafe if it instantiates unknowns shared with other subgoals --- thus
\ttindex{eq_assume_tac} must be used, rather than \ttindex{assume_tac}.

\subsection{Adding rules to classical sets}
Classical rule sets belong to the abstract type \mltydx{claset}, which
supports the following operations (provided the classical reasoner is
installed!):
\begin{ttbox} 
empty_cs    : claset
print_cs    : claset -> unit
addSIs      : claset * thm list -> claset                 \hfill{\bf infix 4}
addSEs      : claset * thm list -> claset                 \hfill{\bf infix 4}
addSDs      : claset * thm list -> claset                 \hfill{\bf infix 4}
addIs       : claset * thm list -> claset                 \hfill{\bf infix 4}
addEs       : claset * thm list -> claset                 \hfill{\bf infix 4}
addDs       : claset * thm list -> claset                 \hfill{\bf infix 4}
delrules    : claset * thm list -> claset                 \hfill{\bf infix 4}
\end{ttbox}
The add operations do not check for repetitions.
\begin{ttdescription}
\item[\ttindexbold{empty_cs}] is the empty classical set.

\item[\ttindexbold{print_cs} $cs$] prints the rules of~$cs$.

\item[$cs$ addSIs $rules$] \indexbold{*addSIs}
adds safe introduction~$rules$ to~$cs$.

\item[$cs$ addSEs $rules$] \indexbold{*addSEs}
adds safe elimination~$rules$ to~$cs$.

\item[$cs$ addSDs $rules$] \indexbold{*addSDs}
adds safe destruction~$rules$ to~$cs$.

\item[$cs$ addIs $rules$] \indexbold{*addIs}
adds unsafe introduction~$rules$ to~$cs$.

\item[$cs$ addEs $rules$] \indexbold{*addEs}
adds unsafe elimination~$rules$ to~$cs$.

\item[$cs$ addDs $rules$] \indexbold{*addDs}
adds unsafe destruction~$rules$ to~$cs$.

\item[$cs$ delrules $rules$] \indexbold{*delrules}
deletes~$rules$ from~$cs$.
\end{ttdescription}

Introduction rules are those that can be applied using ordinary resolution.
The classical set automatically generates their swapped forms, which will
be applied using elim-resolution.  Elimination rules are applied using
elim-resolution.  In a classical set, rules are sorted by the number of new
subgoals they will yield; rules that generate the fewest subgoals will be
tried first (see \S\ref{biresolve_tac}).
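
For example, a classical set for first-order logic might be built up as
follows.  This is only an illustrative sketch: the rule names are those
of~{\tt FOL}, and the classical sets distributed with Isabelle are larger.
\begin{ttbox}
val prop_cs = empty_cs addSIs [conjI, disjCI, impI, notI, iffI]
                       addSEs [conjE, disjE, impCE, FalseE, iffCE];

val FOL_cs  = prop_cs  addSIs [allI]
                       addIs  [exI]
                       addSEs [exE]
                       addEs  [allE];
\end{ttbox}
Safe rules are installed with {\tt addSIs} and {\tt addSEs}; rules that may
instantiate unknowns, such as $({\exists}I)$ and $(\forall E)$, are added as
unsafe.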


\subsection{Modifying the search step}
For a given classical set, the proof strategy is simple.  Perform as many
safe inferences as possible; or else, apply certain safe rules, allowing
instantiation of unknowns; or else, apply an unsafe rule.  The tactics may
also apply {\tt hyp_subst_tac}, if they have been set up to do so (see
below).  They may perform a form of Modus Ponens: if there are assumptions
$P\imp Q$ and~$P$, then replace $P\imp Q$ by~$Q$.

The classical reasoner allows you to modify this basic proof strategy by
applying two arbitrary {\bf wrapper tacticals} to it.  This affects each step of
the search.  Usually they are the identity tacticals, but they could apply 
another tactic before or after the step tactic.  The first one, which is
considered to be safe, affects \ttindex{safe_step_tac} and all the tactics that
call it.  The second one, which may be unsafe, affects 
\ttindex{step_tac}, \ttindex{slow_step_tac} and the tactics that call them.

\begin{ttbox} 
addss        : claset * simpset -> claset                 \hfill{\bf infix 4}
addSbefore   : claset *  (int -> tactic)  -> claset       \hfill{\bf infix 4}
addSaltern   : claset *  (int -> tactic)  -> claset       \hfill{\bf infix 4}
setSWrapper  : claset * ((int -> tactic) -> 
                         (int -> tactic)) -> claset       \hfill{\bf infix 4}
compSWrapper : claset * ((int -> tactic) -> 
                         (int -> tactic)) -> claset       \hfill{\bf infix 4}
addbefore    : claset *  (int -> tactic)  -> claset       \hfill{\bf infix 4}
addaltern    : claset *  (int -> tactic)  -> claset       \hfill{\bf infix 4}
setWrapper   : claset * ((int -> tactic) -> 
                         (int -> tactic)) -> claset       \hfill{\bf infix 4}
compWrapper  : claset * ((int -> tactic) -> 
                         (int -> tactic)) -> claset       \hfill{\bf infix 4}
\end{ttbox}
%
\index{simplification!from classical reasoner} 
The wrapper tacticals underlie the operator \ttindex{addss}, which combines
each search step with simplification.  Strictly speaking, {\tt addss} is not
part of the classical reasoner.  It should be defined (using
{\tt addSaltern (CHANGED o (safe_asm_more_full_simp_tac ss))}) when the
simplifier is installed.

\begin{ttdescription}
\item[$cs$ addss $ss$] \indexbold{*addss}
adds the simpset~$ss$ to the classical set.  The assumptions and goal will be
simplified, in a safe way, after the safe steps of the search.

\item[$cs$ addSbefore $tac$] \indexbold{*addSbefore}
changes the safe wrapper tactical to apply the given tactic {\em before}
each safe step of the search.

\item[$cs$ addSaltern $tac$] \indexbold{*addSaltern}
changes the safe wrapper tactical to apply the given tactic when a safe step 
of the search would fail.

\item[$cs$ setSWrapper $tactical$] \indexbold{*setSWrapper}
specifies a new safe wrapper tactical.  

\item[$cs$ compSWrapper $tactical$] \indexbold{*compSWrapper}
composes the $tactical$ with the existing safe wrapper tactical, 
to combine their effects. 

\item[$cs$ addbefore $tac$] \indexbold{*addbefore}
changes the (unsafe) wrapper tactical to apply the given tactic, which should
be safe, {\em before} each step of the search.

\item[$cs$ addaltern $tac$] \indexbold{*addaltern}
changes the (unsafe) wrapper tactical to apply the given tactic 
{\em as an alternative} after each step of the search.

\item[$cs$ setWrapper $tactical$] \indexbold{*setWrapper}
specifies a new (unsafe) wrapper tactical.  

\item[$cs$ compWrapper $tactical$] \indexbold{*compWrapper}
composes the $tactical$ with the existing (unsafe) wrapper tactical, 
to combine their effects. 
\end{ttdescription}
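
For example, assuming a classical set {\tt cs} and a simpset {\tt ss} are
already available (both names are merely illustrative), one might simplify
after every safe step, or try proof by assumption before every unsafe step:
\begin{ttbox}
val cs1 = cs addss ss;                         (*safe simplification wrapper*)
val cs2 = cs addbefore (TRY o eq_assume_tac);  (*an extra safe tactic*)
\end{ttbox}
Both operations return a new classical set; {\tt cs} itself is unchanged.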


\section{The classical tactics}
\index{classical reasoner!tactics}
If installed, the classical module provides several tactics (and other
operations) for simulating the classical sequent calculus.

\subsection{The automatic tactics}
\begin{ttbox} 
fast_tac      : claset -> int -> tactic
best_tac      : claset -> int -> tactic
slow_tac      : claset -> int -> tactic
slow_best_tac : claset -> int -> tactic
\end{ttbox}
These tactics work by applying {\tt step_tac} or {\tt slow_step_tac}
repeatedly.  Their effect is restricted (by {\tt SELECT_GOAL}) to one subgoal;
they either solve this subgoal or fail.  The {\tt slow_} versions are more
powerful but can be much slower.  

The best-first tactics are guided by a heuristic function: typically, the
total size of the proof state.  This function is supplied in the functor call
that sets up the classical reasoner.
\begin{ttdescription}
\item[\ttindexbold{fast_tac} $cs$ $i$] applies {\tt step_tac} using
depth-first search, to solve subgoal~$i$.

\item[\ttindexbold{best_tac} $cs$ $i$] applies {\tt step_tac} using
best-first search, to solve subgoal~$i$.

\item[\ttindexbold{slow_tac} $cs$ $i$] applies {\tt slow_step_tac} using
depth-first search, to solve subgoal~$i$.

\item[\ttindexbold{slow_best_tac} $cs$ $i$] applies {\tt slow_step_tac} using
best-first search, to solve subgoal~$i$.
\end{ttdescription}
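
For example, given a first-order subgoal one might first try depth-first
search with the rules of {\tt FOL}, falling back on a slower search if
necessary; here {\tt FOL_cs} denotes the classical set supplied by
theory~{\tt FOL}.
\begin{ttbox}
by (fast_tac FOL_cs 1);
(*if that fails or runs away, try a more powerful but slower search*)
by (slow_best_tac FOL_cs 1);
\end{ttbox}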


\subsection{Depth-limited tactics}
\begin{ttbox} 
depth_tac  : claset -> int -> int -> tactic
deepen_tac : claset -> int -> int -> tactic
\end{ttbox}
These work by exhaustive search up to a specified depth.  Unsafe rules are
modified to preserve the formula they act on, so that it can be used repeatedly.
They can prove more goals than {\tt fast_tac} can, but are much
slower, for example if the assumptions have many universal quantifiers.

The depth limits the number of unsafe steps.  If you can estimate the minimum
number of unsafe steps needed, supply this value as~$m$ to save time.
\begin{ttdescription}
\item[\ttindexbold{depth_tac} $cs$ $m$ $i$] 
tries to solve subgoal~$i$ by exhaustive search up to depth~$m$.

\item[\ttindexbold{deepen_tac} $cs$ $m$ $i$] 
tries to solve subgoal~$i$ by iterative deepening.  It calls {\tt depth_tac}
repeatedly with increasing depths, starting with~$m$.
\end{ttdescription}
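
For example, a goal that needs the universal assumption twice, such as
$(\forall x.P(x)\imp P(f(x))) \conj P(a) \imp P(f(f(a)))$, can be proved by
iterative deepening.  The transcript below is illustrative only and assumes
that {\tt FOL}'s current claset is in effect:
\begin{ttbox}
goal FOL.thy "(ALL x. P(x) --> P(f(x))) & P(a) --> P(f(f(a)))";
by (deepen_tac (!claset) 0 1);
\end{ttbox}
Supplying a larger initial depth~$m$, as suggested above, avoids the shallow
searches that are bound to fail.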


\subsection{Single-step tactics}
\begin{ttbox} 
safe_step_tac : claset -> int -> tactic
safe_tac      : claset        -> tactic
inst_step_tac : claset -> int -> tactic
step_tac      : claset -> int -> tactic
slow_step_tac : claset -> int -> tactic
\end{ttbox}
The automatic proof procedures call these tactics.  By calling them
yourself, you can execute these procedures one step at a time.
\begin{ttdescription}
\item[\ttindexbold{safe_step_tac} $cs$ $i$] performs a safe step on
subgoal~$i$.  The safe wrapper tactical is applied to a tactic that may include 
proof by assumption or Modus Ponens (taking care not to instantiate unknowns), 
or {\tt hyp_subst_tac}.

\item[\ttindexbold{safe_tac} $cs$] repeatedly performs safe steps on all 
subgoals.  It is deterministic, with at most one outcome.  If the automatic
tactics fail, try using {\tt safe_tac} to open up your formula; then you
can replicate certain quantifiers explicitly by applying appropriate rules.

\item[\ttindexbold{inst_step_tac} $cs$ $i$] is like {\tt safe_step_tac},
but allows unknowns to be instantiated.

\item[\ttindexbold{step_tac} $cs$ $i$] is the basic step of the proof
  procedure.  The (unsafe) wrapper tactical is applied to a tactic that tries
  {\tt safe_tac}, {\tt inst_step_tac}, or applies an unsafe rule from~$cs$.

\item[\ttindexbold{slow_step_tac}] 
  resembles {\tt step_tac}, but allows backtracking between using safe
  rules with instantiation ({\tt inst_step_tac}) and using unsafe rules.
  The resulting search space is larger.
\end{ttdescription}
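
For example, when an automatic tactic fails, the following pattern of
commands (illustrative only, and assuming a current claset) lets you watch
the procedure at work:
\begin{ttbox}
by (safe_tac (!claset));             (*all safe steps, on every subgoal*)
by (step_tac (!claset) 1);           (*one step, possibly unsafe*)
by (REPEAT (step_tac (!claset) 1));  (*keep stepping on subgoal 1*)
\end{ttbox}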

\subsection{The current claset}
Some logics (\FOL, {\HOL} and \ZF) support the concept of a current
claset\index{claset!current}.  This is a default set of classical rules.  The
underlying idea is quite similar to that of a current simpset described in
\S\ref{sec:simp-for-dummies}; please read that section, including its
warnings.  Just like simpsets, clasets can be associated with theories.  The
tactics
\begin{ttbox}
Step_tac     : int -> tactic
Fast_tac     : int -> tactic
Best_tac     : int -> tactic
Deepen_tac   : int -> int -> tactic
\end{ttbox}
\indexbold{*Step_tac} \indexbold{*Best_tac} \indexbold{*Fast_tac}
\indexbold{*Deepen_tac} 
make use of the current claset.  For example, {\tt Fast_tac} is defined as
follows:
\begin{ttbox}
fun Fast_tac i = fast_tac (!claset) i;
\end{ttbox}
where \ttindex{!claset} is the current claset.
The functions
\begin{ttbox}
AddSIs, AddSEs, AddSDs, AddIs, AddEs, AddDs: thm list -> unit
\end{ttbox}
\indexbold{*AddSIs} \indexbold{*AddSEs} \indexbold{*AddSDs}
\indexbold{*AddIs} \indexbold{*AddEs} \indexbold{*AddDs}
add rules to the current claset.  They work exactly like their lower-case
counterparts, such as {\tt addSIs}.  The function
\begin{ttbox}
Delrules : thm list -> unit
\end{ttbox}
deletes rules from the current claset.  You do not need to worry about which
of the above {\tt Add} functions was originally used to add a rule.
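
For example, after proving suitable introduction and elimination rules you
can install them once and let the default tactics use them; the theorem
names below are purely hypothetical:
\begin{ttbox}
AddSIs [subrel_intr];     (*hypothetical safe introduction rule*)
AddEs  [subrel_elim];     (*hypothetical unsafe elimination rule*)
by (Fast_tac 1);          (*now uses the enlarged current claset*)
\end{ttbox}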

\subsection{Other useful tactics}
\index{tactics!for contradiction}
\index{tactics!for Modus Ponens}
\begin{ttbox} 
contr_tac    :             int -> tactic
mp_tac       :             int -> tactic
eq_mp_tac    :             int -> tactic
swap_res_tac : thm list -> int -> tactic
\end{ttbox}
These can be used in the body of a specialized search.
\begin{ttdescription}
\item[\ttindexbold{contr_tac} {\it i}]\index{assumptions!contradictory}
  solves subgoal~$i$ by detecting a contradiction between two assumptions of
  the form $P$ and~$\neg P$, and fails otherwise.  It may instantiate
  unknowns.  The tactic can produce multiple outcomes, enumerating all
  possible contradictions.

\item[\ttindexbold{mp_tac} {\it i}] 
is like {\tt contr_tac}, but also attempts to perform Modus Ponens in
subgoal~$i$.  If there are assumptions $P\imp Q$ and~$P$, then it replaces
$P\imp Q$ by~$Q$.  It may instantiate unknowns.  It fails if it can do
nothing.

\item[\ttindexbold{eq_mp_tac} {\it i}] 
is like {\tt mp_tac} {\it i}, but may not instantiate unknowns --- thus, it
is safe.

\item[\ttindexbold{swap_res_tac} {\it thms} {\it i}] refines subgoal~$i$ of
the proof state using {\it thms}, which should be a list of introduction
rules.  First, it attempts to solve the goal using {\tt assume_tac} or
{\tt contr_tac}.  It then attempts to apply each rule in turn, attempting
resolution and also elim-resolution with the swapped form.
\end{ttdescription}
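
For example, a small hand-written search tactic could combine them with the
standard tacticals.  The list {\tt rules} below is a hypothetical collection
of introduction rules:
\begin{ttbox}
val crunch_tac =
    REPEAT o (assume_tac ORELSE' contr_tac ORELSE' swap_res_tac rules);
by (crunch_tac 1);
\end{ttbox}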

\subsection{Creating swapped rules}
\begin{ttbox} 
swapify   : thm list -> thm list
joinrules : thm list * thm list -> (bool * thm) list
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{swapify} {\it thms}] returns a list consisting of the
swapped versions of~{\it thms}, regarded as introduction rules.

\item[\ttindexbold{joinrules} ({\it intrs}, {\it elims})]
joins introduction rules, their swapped versions, and elimination rules for
use with \ttindex{biresolve_tac}.  Each rule is paired with~{\tt false}
(indicating ordinary resolution) or~{\tt true} (indicating
elim-resolution).
\end{ttdescription}
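
These functions matter chiefly when you write your own search tactics.  For
instance, given hypothetical lists {\tt intrs} and {\tt elims} of rules, a
single resolution step against all of them (including the swapped
introduction rules) is
\begin{ttbox}
by (biresolve_tac (joinrules (intrs, elims)) 1);
\end{ttbox}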


\section{Setting up the classical reasoner}
\index{classical reasoner!setting up}
Isabelle's classical object-logics, including {\tt FOL} and {\tt HOL}, have
the classical reasoner already set up.  When defining a new classical logic,
you should set up the reasoner yourself.  It consists of the \ML{} functor
\ttindex{ClassicalFun}, which takes the argument
signature {\tt CLASSICAL_DATA}:
\begin{ttbox} 
signature CLASSICAL_DATA =
  sig
  val mp             : thm
  val not_elim       : thm
  val swap           : thm
  val sizef          : thm -> int
  val hyp_subst_tacs : (int -> tactic) list
  end;
\end{ttbox}
Thus, the functor requires the following items:
\begin{ttdescription}
\item[\tdxbold{mp}] should be the Modus Ponens rule
$\List{\Var{P}\imp\Var{Q};\; \Var{P}} \Imp \Var{Q}$.

\item[\tdxbold{not_elim}] should be the contradiction rule
$\List{\neg\Var{P};\; \Var{P}} \Imp \Var{R}$.

\item[\tdxbold{swap}] should be the swap rule
$\List{\neg \Var{P}; \; \neg \Var{R}\Imp \Var{P}} \Imp \Var{R}$.

\item[\ttindexbold{sizef}] is the heuristic function used for best-first
search.  It should estimate the size of the remaining subgoals.  A good
heuristic function is \ttindex{size_of_thm}, which measures the size of the
proof state.  Another size function might ignore certain subgoals (say,
those concerned with type checking).  A heuristic function might simply
count the subgoals.

\item[\ttindexbold{hyp_subst_tacs}] is a list of tactics for substitution in
the hypotheses, typically created by \ttindex{HypsubstFun} (see
Chapter~\ref{substitution}).  This list can, of course, be empty.  The
tactics are assumed to be safe!
\end{ttdescription}
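
For example, an instantiation for a new object-logic might look roughly as
follows.  This is only a sketch: {\tt mp}, {\tt notE} and {\tt swap} stand
for theorems proved in your logic, and the hypothesis-substitution tactic
may be omitted.
\begin{ttbox}
structure Classical_Data =
  struct
  val mp             = mp
  val not_elim       = notE
  val swap           = swap
  val sizef          = size_of_thm
  val hyp_subst_tacs = [hyp_subst_tac]
  end;

structure Cla = ClassicalFun(Classical_Data);
open Cla;
\end{ttbox}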
The functor is not at all sensitive to the formalization of the
object-logic.  It does not even examine the rules, but merely applies them
according to its fixed strategy.  The functor resides in {\tt
Provers/classical.ML} in the Isabelle distribution directory.

\index{classical reasoner|)}