% doc-src/Ref/classical.tex
% author: lcp
% Fri, 15 Apr 1994 16:29:48 +0200
% changeset 319 f143f7686cd6
% parent 308 f4cecad9b6a0
% child 332 01b87a921967
% penultimate Springer draft
%% $Id$
\chapter{The Classical Reasoner}\label{chap:classical}
\index{classical reasoner|(}
\newcommand\ainfer[2]{\begin{array}{r@{\,}l}#2\\ \hline#1\end{array}}

Although Isabelle is generic, many users will be working in some extension
of classical first-order logic.  Isabelle's set theory~{\tt ZF} is built
upon theory~{\tt FOL}, while higher-order logic contains first-order logic
as a fragment.  Theorem-proving in predicate logic is undecidable, but many
researchers have developed strategies to assist in this task.

Isabelle's classical reasoner is an \ML{} functor that accepts certain
information about a logic and delivers a suite of automatic tactics.  Each
tactic takes a collection of rules and executes a simple, non-clausal proof
procedure.  They are slow and simplistic compared with resolution theorem
provers, but they can save considerable time and effort.  They can prove
theorems such as Pelletier's~\cite{pelletier86} problems~40 and~41 in
seconds:
\[ (\exists y. \forall x. J(y,x) \bimp \neg J(x,x))  
   \imp  \neg (\forall x. \exists y. \forall z. J(z,y) \bimp \neg J(z,x)) \]
\[ (\forall z. \exists y. \forall x. F(x,y) \bimp F(x,z) \conj \neg F(x,x))
   \imp \neg (\exists z. \forall x. F(x,z))  
\]
%
The tactics are generic.  They are not restricted to first-order logic, and
have been heavily used in the development of Isabelle's set theory.  Few
interactive proof assistants provide this much automation.  The tactics can
be traced, and their components can be called directly; in this manner,
any proof can be viewed interactively.

The logics {\tt FOL}, {\tt HOL} and {\tt ZF} have the classical reasoner
already installed.  We shall first consider how to use it, then see how to
instantiate it for new logics.

\section{The sequent calculus}
\index{sequent calculus}
Isabelle supports natural deduction, which is easy to use for interactive
proof.  But natural deduction does not easily lend itself to automation,
and has a bias towards intuitionism.  For certain proofs in classical
logic, it cannot be called natural.  The {\bf sequent calculus}, a
generalization of natural deduction, is easier to automate.

A {\bf sequent} has the form $\Gamma\turn\Delta$, where $\Gamma$
and~$\Delta$ are sets of formulae.%
\footnote{For first-order logic, sequents can equivalently be made from
  lists or multisets of formulae.} The sequent
\[ P@1,\ldots,P@m\turn Q@1,\ldots,Q@n \]
is {\bf valid} if $P@1\conj\ldots\conj P@m$ implies $Q@1\disj\ldots\disj
Q@n$.  Thus $P@1,\ldots,P@m$ represent assumptions, each of which is true,
while $Q@1,\ldots,Q@n$ represent alternative goals.  A sequent is {\bf
basic} if its left and right sides have a common formula, as in $P,Q\turn
Q,R$; basic sequents are trivially valid.

Sequent rules are classified as {\bf right} or {\bf left}, indicating which
side of the $\turn$~symbol they operate on.  Rules that operate on the
right side are analogous to natural deduction's introduction rules, and
left rules are analogous to elimination rules.
Recall the natural deduction rules for first-order logic,
\iflabelundefined{fol-fig}{from {\it Introduction to Isabelle}}%
                          {Fig.\ts\ref{fol-fig}}.
The sequent calculus analogue of~$({\imp}I)$ is the rule
$$ \ainfer{\Gamma &\turn \Delta, P\imp Q}{P,\Gamma &\turn \Delta,Q}
   \eqno({\imp}R) $$
This breaks down some implication on the right side of a sequent; $\Gamma$
and $\Delta$ stand for the sets of formulae that are unaffected by the
inference.  The analogue of the pair~$({\disj}I1)$ and~$({\disj}I2)$ is the
single rule 
$$ \ainfer{\Gamma &\turn \Delta, P\disj Q}{\Gamma &\turn \Delta,P,Q}
   \eqno({\disj}R) $$
This breaks down some disjunction on the right side, replacing it by both
disjuncts.  Thus, the sequent calculus is a kind of multiple-conclusion logic.

To illustrate the use of multiple formulae on the right, let us prove
the classical theorem $(P\imp Q)\disj(Q\imp P)$.  Working backwards, we
reduce this formula to a basic sequent:
\[ \infer[(\disj)R]{\turn(P\imp Q)\disj(Q\imp P)}
   {\infer[(\imp)R]{\turn(P\imp Q), (Q\imp P)\;}
    {\infer[(\imp)R]{P \turn Q, (Q\imp P)\qquad}
                    {P, Q \turn Q, P\qquad\qquad}}}
\]
This example is typical of the sequent calculus: start with the desired
theorem and apply rules backwards in a fairly arbitrary manner.  This yields a
surprisingly effective proof procedure.  Quantifiers add few complications,
since Isabelle handles parameters and schematic variables.  See Chapter~10
of {\em ML for the Working Programmer}~\cite{paulson91} for further
discussion.

\section{Simulating sequents by natural deduction}
Isabelle can represent sequents directly, as in the object-logic~{\tt LK}\@.
But natural deduction is easier to work with, and most object-logics employ
it.  Fortunately, we can simulate the sequent $P@1,\ldots,P@m\turn
Q@1,\ldots,Q@n$ by the Isabelle formula
\[ \List{P@1;\ldots;P@m; \neg Q@2;\ldots; \neg Q@n}\Imp Q@1, \]
where the order of the assumptions and the choice of~$Q@1$ are arbitrary.
Elim-resolution plays a key role in simulating sequent proofs.

We can easily handle reasoning on the left.
As discussed in
\iflabelundefined{destruct}{{\it Introduction to Isabelle}}{\S\ref{destruct}}, 
elim-resolution with the rules $(\disj E)$, $(\bot E)$ and $(\exists E)$
achieves an effect similar to that of the corresponding sequent rules.  For the
other connectives, we use sequent-style elimination rules instead of
destruction rules such as $({\conj}E1,2)$ and $(\forall E)$.  But note that
the rule $(\neg L)$ has no effect under our representation of sequents!
$$ \ainfer{\neg P,\Gamma &\turn \Delta}{\Gamma &\turn \Delta,P}
   \eqno({\neg}L) $$
What about reasoning on the right?  Introduction rules can only affect the
formula in the conclusion, namely~$Q@1$.  The other right-side formulae are
represented as negated assumptions, $\neg Q@2$, \ldots,~$\neg Q@n$.  
\index{assumptions!negated}
In order to operate on one of these, it must first be exchanged with~$Q@1$.
Elim-resolution with the {\bf swap} rule has this effect:
$$ \List{\neg P; \; \neg R\Imp P} \Imp R   \eqno(swap)$$
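For example, the goal $\List{\neg Q}\Imp P$, which represents the sequent
$\turn P,Q$, is transformed by elim-resolution with $(swap)$ into
\[ \List{\neg P}\Imp Q, \]
exchanging the two right-side formulae: the assumption $\neg Q$ is consumed,
$Q$ becomes the conclusion, and $P$ reappears as a negated assumption.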
To ensure that swaps occur only when necessary, each introduction rule is
converted into a swapped form: it is resolved with the second premise
of~$(swap)$.  The swapped form of~$({\conj}I)$, which might be
called~$({\neg\conj}E)$, is
\[ \List{\neg(P\conj Q); \; \neg R\Imp P; \; \neg R\Imp Q} \Imp R. \]
Similarly, the swapped form of~$({\imp}I)$ is
\[ \List{\neg(P\imp Q); \; \List{\neg R;P}\Imp Q} \Imp R  \]
Swapped introduction rules are applied using elim-resolution, which deletes
the negated formula.  Our representation of sequents also requires the use
of ordinary introduction rules.  If we had no regard for readability, we
could treat the right side more uniformly by representing sequents as
\[ \List{P@1;\ldots;P@m; \neg Q@1;\ldots; \neg Q@n}\Imp \bot. \]

\section{Extra rules for the sequent calculus}
As mentioned, destruction rules such as $({\conj}E1,2)$ and $(\forall E)$
must be replaced by sequent-style elimination rules.  In addition, we need
rules to embody the classical equivalence between $P\imp Q$ and $\neg P\disj
Q$.  The introduction rules~$({\disj}I1,2)$ are replaced by a rule that
simulates $({\disj}R)$:
\[ (\neg Q\Imp P) \Imp P\disj Q \]
The destruction rule $({\imp}E)$ is replaced by
\[ \List{P\imp Q;\; \neg P\Imp R;\; Q\Imp R} \Imp R. \]
Quantifier replication also requires special rules.  In classical logic,
$\exists x{.}P$ is equivalent to $\neg\forall x{.}\neg P$; the rules
$(\exists R)$ and $(\forall L)$ are dual:
\[ \ainfer{\Gamma &\turn \Delta, \exists x{.}P}
          {\Gamma &\turn \Delta, \exists x{.}P, P[t/x]} \; (\exists R)
   \qquad
   \ainfer{\forall x{.}P, \Gamma &\turn \Delta}
          {P[t/x], \forall x{.}P, \Gamma &\turn \Delta} \; (\forall L)
\]
Thus both kinds of quantifier may be replicated.  Theorems requiring
multiple uses of a universal formula are easy to invent; consider 
\[ (\forall x.P(x)\imp P(f(x))) \conj P(a) \imp P(f^n(a)), \]
for any~$n>1$.  Natural examples of the multiple use of an existential
formula are rare; a standard one is $\exists x.\forall y. P(x)\imp P(y)$.

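For instance, taking $n=2$ in the formula above requires two instances of
the universal formula, first with $t=a$ and then with $t=f(a)$:
\[ P(a)\imp P(f(a)), \qquad P(f(a))\imp P(f(f(a))). \]
A single use of $(\forall E@2)$ would discard the universal assumption
after producing the first instance, so the proof could not be completed.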
Forgoing quantifier replication loses completeness, but gains decidability,
since the search space becomes finite.  Many useful theorems can be proved
without replication, and the search generally delivers its verdict in a
reasonable time.  To adopt this approach, represent the sequent rules
$(\exists R)$, $(\exists L)$ and $(\forall R)$ by $(\exists I)$, $(\exists
E)$ and $(\forall I)$, respectively, and put $(\forall E)$ into elimination
form:
$$ \List{\forall x{.}P(x); P(t)\Imp Q} \Imp Q    \eqno(\forall E@2) $$
Elim-resolution with this rule will delete the universal formula after a
single use.  To replicate universal quantifiers, replace the rule by
$$ \List{\forall x{.}P(x);\; \List{P(t); \forall x{.}P(x)}\Imp Q} \Imp Q.
   \eqno(\forall E@3) $$
To replicate existential quantifiers, replace $(\exists I)$ by
\[ \Bigl(\neg(\exists x{.}P(x)) \Imp P(t)\Bigr) \Imp \exists x{.}P(x). \]
All introduction rules mentioned above are also useful in swapped form.

Replication makes the search space infinite; we must apply the rules with
care.  The classical reasoner distinguishes between safe and unsafe
rules, applying the latter only when there is no alternative.  Depth-first
search may well go down a blind alley; best-first search is better behaved
in an infinite search space.  However, quantifier replication is too
expensive for proving any but the simplest theorems.
\section{Classical rule sets}
\index{classical sets}
Each automatic tactic takes a {\bf classical set} --- a collection of
rules, classified as introduction or elimination and as {\bf safe} or {\bf
unsafe}.  In general, safe rules can be attempted blindly, while unsafe
rules must be used with care.  A safe rule must never reduce a provable
goal to an unprovable set of subgoals.

The rule~$({\disj}I1)$ is unsafe because it reduces $P\disj Q$ to~$P$.  Any
rule whose premises contain new unknowns is unsafe.  The elimination
rule~$(\forall E@2)$ is unsafe, since it is applied via elim-resolution,
which discards the assumption $\forall x{.}P(x)$ and replaces it by the
weaker assumption~$P(\Var{t})$.  The rule $({\exists}I)$ is unsafe for
similar reasons.  The rule~$(\forall E@3)$ is unsafe in a different sense:
since it keeps the assumption $\forall x{.}P(x)$, it is prone to looping.
In classical first-order logic, all rules are safe except those mentioned
above.

The safe/unsafe distinction is vague, and may be regarded merely as a way
of giving some rules priority over others.  One could argue that
$({\disj}E)$ is unsafe, because repeated application of it could generate
exponentially many subgoals.  Induction rules are unsafe because inductive
proofs are difficult to set up automatically.  Any inference that
instantiates an unknown in the proof state is unsafe --- thus
\ttindex{match_tac} must be used, rather than \ttindex{resolve_tac}.  Even
proof by assumption is unsafe if it instantiates unknowns shared with other
subgoals --- thus \ttindex{eq_assume_tac} must be used, rather than
\ttindex{assume_tac}.

Classical rule sets belong to the abstract type \mltydx{claset}, which
supports the following operations (provided the classical reasoner is
installed!):
\begin{ttbox} 
empty_cs : claset
addSIs   : claset * thm list -> claset                 \hfill{\bf infix 4}
addSEs   : claset * thm list -> claset                 \hfill{\bf infix 4}
addSDs   : claset * thm list -> claset                 \hfill{\bf infix 4}
addIs    : claset * thm list -> claset                 \hfill{\bf infix 4}
addEs    : claset * thm list -> claset                 \hfill{\bf infix 4}
addDs    : claset * thm list -> claset                 \hfill{\bf infix 4}
print_cs : claset -> unit
\end{ttbox}
There are no operations for deletion from a classical set.  The add
operations do not check for repetitions.
\begin{ttdescription}
\item[\ttindexbold{empty_cs}] is the empty classical set.

\item[$cs$ addSIs $rules$] \indexbold{*addSIs}
adds safe introduction~$rules$ to~$cs$.

\item[$cs$ addSEs $rules$] \indexbold{*addSEs}
adds safe elimination~$rules$ to~$cs$.

\item[$cs$ addSDs $rules$] \indexbold{*addSDs}
adds safe destruction~$rules$ to~$cs$.

\item[$cs$ addIs $rules$] \indexbold{*addIs}
adds unsafe introduction~$rules$ to~$cs$.

\item[$cs$ addEs $rules$] \indexbold{*addEs}
adds unsafe elimination~$rules$ to~$cs$.

\item[$cs$ addDs $rules$] \indexbold{*addDs}
adds unsafe destruction~$rules$ to~$cs$.

\item[\ttindexbold{print_cs} $cs$] prints the rules of~$cs$.
\end{ttdescription}
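For example, a classical set for an extended theory might be built from an
existing one by adding new rules.  The following is merely an illustrative
sketch; it assumes a starting set {\tt FOL_cs} and the usual first-order
rule names:
\begin{ttbox} 
val mix_cs = FOL_cs addSIs [conjI, impI]
                    addSEs [conjE, disjE]
                    addIs  [exI];
\end{ttbox}
Here the safe rules are supplied using {\tt addSIs} and {\tt addSEs}, while
{\tt exI}, which may instantiate unknowns, is supplied as an unsafe rule
using {\tt addIs}.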

Introduction rules are those that can be applied using ordinary resolution.
The classical set automatically generates their swapped forms, which will
be applied using elim-resolution.  Elimination rules are applied using
elim-resolution.  In a classical set, rules are sorted by the number of new
subgoals they will yield; rules that generate the fewest subgoals will be
tried first (see \S\ref{biresolve_tac}).

For a given classical set, the proof strategy is simple.  Perform as many
safe inferences as possible; or else, apply certain safe rules, allowing
instantiation of unknowns; or else, apply an unsafe rule.  The tactics may
also apply {\tt hyp_subst_tac}, if they have been set up to do so (see
below).  They may perform a form of Modus Ponens: if there are assumptions
$P\imp Q$ and~$P$, then replace $P\imp Q$ by~$Q$.
\section{The classical tactics}
\index{classical reasoner!tactics}
If installed, the classical module provides several tactics (and other
operations) for simulating the classical sequent calculus.

\subsection{Single-step tactics}
\begin{ttbox} 
safe_step_tac : claset -> int -> tactic
safe_tac      : claset        -> tactic
inst_step_tac : claset -> int -> tactic
step_tac      : claset -> int -> tactic
slow_step_tac : claset -> int -> tactic
\end{ttbox}
The automatic proof procedures call these tactics.  By calling them
yourself, you can execute these procedures one step at a time.
\begin{ttdescription}
\item[\ttindexbold{safe_step_tac} $cs$ $i$] performs a safe step on
subgoal~$i$.  This may include proof by assumption or Modus Ponens (taking
care not to instantiate unknowns), or {\tt hyp_subst_tac}.

\item[\ttindexbold{safe_tac} $cs$] repeatedly performs safe steps on all 
subgoals.  It is deterministic, with at most one outcome.  If the automatic
tactics fail, try using {\tt safe_tac} to open up your formula; then you
can replicate certain quantifiers explicitly by applying appropriate rules.

\item[\ttindexbold{inst_step_tac} $cs$ $i$] is like {\tt safe_step_tac},
but allows unknowns to be instantiated.

\item[\ttindexbold{step_tac} $cs$ $i$] tries {\tt safe_tac}.  If this
fails, it tries {\tt inst_step_tac}, or applies an unsafe rule from~$cs$.
This is the basic step of the proof procedure.

\item[\ttindexbold{slow_step_tac}] 
  resembles {\tt step_tac}, but allows backtracking between using safe
  rules with instantiation ({\tt inst_step_tac}) and using unsafe rules.
  The resulting search space is too large for use in the standard proof
  procedures, but {\tt slow_step_tac} is worth considering in special
  situations.
\end{ttdescription}

\subsection{The automatic tactics}
\begin{ttbox} 
fast_tac : claset -> int -> tactic
best_tac : claset -> int -> tactic
\end{ttbox}
Both of these tactics work by applying {\tt step_tac} repeatedly.  Their
effect is restricted (by {\tt SELECT_GOAL}) to one subgoal; they either
solve this subgoal or fail.
\begin{ttdescription}
\item[\ttindexbold{fast_tac} $cs$ $i$] applies {\tt step_tac} using
depth-first search, to solve subgoal~$i$.

\item[\ttindexbold{best_tac} $cs$ $i$] applies {\tt step_tac} using
best-first search, to solve subgoal~$i$.  A heuristic function ---
typically, the total size of the proof state --- guides the search.  This
function is supplied when the classical reasoner is set up.
\end{ttdescription}
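For example, the classical theorem proved by hand in the sequent calculus
section above can be given to {\tt fast_tac}.  This sketch assumes
theory {\tt FOL} and its standard classical set {\tt FOL_cs}:
\begin{ttbox} 
goal FOL.thy "(P --> Q) | (Q --> P)";
by (fast_tac FOL_cs 1);
\end{ttbox}
The tactic performs automatically the same backward steps that were applied
manually: $({\disj}R)$ followed by two uses of $({\imp}R)$, closing the
resulting basic sequent by assumption.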

\subsection{Other useful tactics}
\index{tactics!for contradiction}
\index{tactics!for Modus Ponens}
\begin{ttbox} 
contr_tac    :             int -> tactic
mp_tac       :             int -> tactic
eq_mp_tac    :             int -> tactic
swap_res_tac : thm list -> int -> tactic
\end{ttbox}
These can be used in the body of a specialized search.
\begin{ttdescription}
\item[\ttindexbold{contr_tac} {\it i}]\index{assumptions!contradictory}
  solves subgoal~$i$ by detecting a contradiction among two assumptions of
  the form $P$ and~$\neg P$, or fails.  It may instantiate unknowns.  The
  tactic can produce multiple outcomes, enumerating all possible
  contradictions.

\item[\ttindexbold{mp_tac} {\it i}] 
is like {\tt contr_tac}, but also attempts to perform Modus Ponens in
subgoal~$i$.  If there are assumptions $P\imp Q$ and~$P$, then it replaces
$P\imp Q$ by~$Q$.  It may instantiate unknowns.  It fails if it can do
nothing.

\item[\ttindexbold{eq_mp_tac} {\it i}] 
is like {\tt mp_tac} {\it i}, but may not instantiate unknowns --- thus, it
is safe.

\item[\ttindexbold{swap_res_tac} {\it thms} {\it i}] refines subgoal~$i$ of
the proof state using {\it thms}, which should be a list of introduction
rules.  First, it attempts to solve the goal using {\tt assume_tac} or
{\tt contr_tac}.  It then attempts to apply each rule in turn, attempting
resolution and also elim-resolution with the swapped form.
\end{ttdescription}

\subsection{Creating swapped rules}
\begin{ttbox} 
swapify   : thm list -> thm list
joinrules : thm list * thm list -> (bool * thm) list
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{swapify} {\it thms}] returns a list consisting of the
swapped versions of~{\it thms}, regarded as introduction rules.

\item[\ttindexbold{joinrules} ({\it intrs}, {\it elims})]
joins introduction rules, their swapped versions, and elimination rules for
use with \ttindex{biresolve_tac}.  Each rule is paired with~{\tt false}
(indicating ordinary resolution) or~{\tt true} (indicating
elim-resolution).
\end{ttdescription}
\section{Setting up the classical reasoner}
\index{classical reasoner!setting up}
Isabelle's classical object-logics, including {\tt FOL} and {\tt HOL}, have
the classical reasoner already set up.  When defining a new classical logic,
you should set up the reasoner yourself.  It consists of the \ML{} functor
\ttindex{ClassicalFun}, whose argument must match the signature
{\tt CLASSICAL_DATA}:
\begin{ttbox} 
signature CLASSICAL_DATA =
  sig
  val mp             : thm
  val not_elim       : thm
  val swap           : thm
  val sizef          : thm -> int
  val hyp_subst_tacs : (int -> tactic) list
  end;
\end{ttbox}
Thus, the functor requires the following items:
\begin{ttdescription}
\item[\tdxbold{mp}] should be the Modus Ponens rule
$\List{\Var{P}\imp\Var{Q};\; \Var{P}} \Imp \Var{Q}$.

\item[\tdxbold{not_elim}] should be the contradiction rule
$\List{\neg\Var{P};\; \Var{P}} \Imp \Var{R}$.

\item[\tdxbold{swap}] should be the swap rule
$\List{\neg \Var{P}; \; \neg \Var{R}\Imp \Var{P}} \Imp \Var{R}$.

\item[\ttindexbold{sizef}] is the heuristic function used for best-first
search.  It should estimate the size of the remaining subgoals.  A good
heuristic function is \ttindex{size_of_thm}, which measures the size of the
proof state.  Another size function might ignore certain subgoals (say,
those concerned with type checking).  A heuristic function might simply
count the subgoals.

\item[\ttindexbold{hyp_subst_tacs}] is a list of tactics for substitution in
the hypotheses, typically created by \ttindex{HypsubstFun} (see
Chapter~\ref{substitution}).  This list can, of course, be empty.  The
tactics are assumed to be safe!
\end{ttdescription}
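An instantiation of the functor might look as follows.  This is only a
hypothetical sketch: the theorem names {\tt mp}, {\tt notE} and {\tt swap}
must denote the corresponding rules of your object-logic, and the structure
names are arbitrary:
\begin{ttbox} 
structure Classical_Data : CLASSICAL_DATA =
  struct
  val mp             = mp
  val not_elim       = notE
  val swap           = swap
  val sizef          = size_of_thm
  val hyp_subst_tacs = [hyp_subst_tac]
  end;

structure Cla = ClassicalFun(Classical_Data);
\end{ttbox}
The resulting structure {\tt Cla} provides the tactics and classical-set
operations described in this chapter.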
The functor is not at all sensitive to the formalization of the
object-logic.  It does not even examine the rules, but merely applies them
according to its fixed strategy.  The functor resides in {\tt
Provers/classical.ML} in the Isabelle distribution directory.

\index{classical reasoner|)}