%% $Id$
\chapter{The classical theorem prover}
\index{classical prover|(}
\newcommand\ainfer[2]{\begin{array}{r@{}l}#2\\ \hline#1\end{array}}
Although Isabelle is generic, many users will be working in some extension
of classical first-order logic (FOL).  Isabelle's set theory is built upon
FOL, while higher-order logic contains FOL as a fragment.  Theorem-proving
in FOL is of course undecidable, but many researchers have developed
strategies to assist in this task.

Isabelle's classical module is an \ML{} functor that accepts certain
information about a logic and delivers a suite of automatic tactics.  Each
tactic takes a collection of rules and executes a simple, non-clausal proof
procedure.  They are slow and simplistic compared with resolution theorem
provers, but they can save considerable time and effort.  They can prove
theorems such as Pelletier's~\cite{pelletier86} problems~40 and~41 in
seconds:
\[ (\exists y. \forall x. J(y,x) \bimp \neg J(x,x))  
   \imp  \neg (\forall x. \exists y. \forall z. J(z,y) \bimp \neg J(z,x)) \]
\[ (\forall z. \exists y. \forall x. F(x,y) \bimp F(x,z) \conj \neg F(x,x))
   \imp \neg (\exists z. \forall x. F(x,z))  
\]
The tactics are generic.  They are not restricted to~FOL, and have been
heavily used in the development of Isabelle's set theory.  Few interactive
proof assistants provide this much automation.  Isabelle does not record
proofs, but the tactics can be traced, and their components can be called
directly.  In this manner, any proof can be viewed interactively.

The logics FOL, HOL and ZF have the classical prover already installed.  We
shall first consider how to use it, then see how to instantiate it for new
logics.


\section{The sequent calculus}
\index{sequent calculus}
Isabelle supports natural deduction, which is easy to use for interactive
proof.  But natural deduction does not easily lend itself to automation,
and has a bias towards intuitionism.  For certain proofs in classical
logic, it cannot be called natural.  The {\bf sequent calculus}, a
generalization of natural deduction, is easier to automate.

A {\bf sequent} has the form $\Gamma\turn\Delta$, where $\Gamma$
and~$\Delta$ are sets of formulae.\footnote{For FOL, sequents can
equivalently be made from lists or multisets of formulae.}  The sequent
\[ P@1,\ldots,P@m\turn Q@1,\ldots,Q@n \]
is {\bf valid} if $P@1\conj\ldots\conj P@m$ implies $Q@1\disj\ldots\disj
Q@n$.  Thus $P@1,\ldots,P@m$ represent assumptions, each of which is true,
while $Q@1,\ldots,Q@n$ represent alternative goals.  A sequent is {\bf
basic} if its left and right sides have a common formula, as in $P,Q\turn
Q,R$; basic sequents are trivially valid.

Sequent rules are classified as {\bf right} or {\bf left}, indicating which
side of the $\turn$~symbol they operate on.  Rules that operate on the
right side are analogous to natural deduction's introduction rules, and
left rules are analogous to elimination rules.  The sequent calculus
analogue of~$({\imp}I)$ is the rule
$$ \ainfer{\Gamma &\turn \Delta, P\imp Q}{P,\Gamma &\turn \Delta,Q}
   \eqno({\imp}R) $$
This breaks down some implication on the right side of a sequent; $\Gamma$
and $\Delta$ stand for the sets of formulae that are unaffected by the
inference.  The analogue of the pair~$({\disj}I1)$ and~$({\disj}I2)$ is the
single rule 
$$ \ainfer{\Gamma &\turn \Delta, P\disj Q}{\Gamma &\turn \Delta,P,Q}
   \eqno({\disj}R) $$
This breaks down some disjunction on the right side, replacing it by both
disjuncts.  Thus, the sequent calculus is a kind of multiple-conclusion logic.

To illustrate the use of multiple formulae on the right, let us prove
the classical theorem $(P\imp Q)\disj(Q\imp P)$.  Working backwards, we
reduce this formula to a basic sequent:
\[ \infer[({\disj}R)]{\turn(P\imp Q)\disj(Q\imp P)}
   {\infer[({\imp}R)]{\turn(P\imp Q), (Q\imp P)\;}
    {\infer[({\imp}R)]{P \turn Q, (Q\imp P)\qquad}
                    {P, Q \turn Q, P\qquad\qquad}}}
\]
This example is typical of the sequent calculus: start with the desired
theorem and apply rules backwards in a fairly arbitrary manner.  This yields a
surprisingly effective proof procedure.  Quantifiers add few complications,
since Isabelle handles parameters and schematic variables.  See Chapter~10
of {\em ML for the Working Programmer}~\cite{paulson91} for further
discussion.


\section{Simulating sequents by natural deduction}
Isabelle can represent sequents directly, as in the object-logic~{\sc lk}.
But natural deduction is easier to work with, and most object-logics employ
it.  Fortunately, we can simulate the sequent $P@1,\ldots,P@m\turn
Q@1,\ldots,Q@n$ by the Isabelle formula
\[ \List{P@1;\ldots;P@m; \neg Q@2;\ldots; \neg Q@n}\Imp Q@1, \]
where the order of the assumptions and the choice of~$Q@1$ are arbitrary.
Elim-resolution plays a key role in simulating sequent proofs.

We can easily handle reasoning on the left.
As discussed in the {\em Introduction to Isabelle},
elim-resolution with the rules $(\disj E)$, $(\bot E)$ and $(\exists E)$
achieves a similar effect to the corresponding sequent rules.  For the
other connectives, we use sequent-style elimination rules instead of
destruction rules.  But note that the rule $(\neg L)$ has no effect
under our representation of sequents!
$$ \ainfer{\neg P,\Gamma &\turn \Delta}{\Gamma &\turn \Delta,P}
   \eqno({\neg}L) $$
What about reasoning on the right?  Introduction rules can only affect the
formula in the conclusion, namely~$Q@1$.  The other right-side formulae are
represented as negated assumptions, $\neg Q@2$, \ldots,~$\neg Q@n$.  In
order to operate on one of these, it must first be exchanged with~$Q@1$.
Elim-resolution with the {\bf swap} rule has this effect:
$$ \List{\neg P; \; \neg R\Imp P} \Imp R   \eqno(swap)$$
To ensure that swaps occur only when necessary, each introduction rule is
converted into a swapped form: it is resolved with the second premise
of~$(swap)$.  The swapped form of~$({\conj}I)$, which might be
called~$({\neg\conj}E)$, is
\[ \List{\neg(P\conj Q); \; \neg R\Imp P; \; \neg R\Imp Q} \Imp R. \]
Similarly, the swapped form of~$({\imp}I)$ is
\[ \List{\neg(P\imp Q); \; \List{\neg R;P}\Imp Q} \Imp R  \]
Swapped introduction rules are applied using elim-resolution, which deletes
the negated formula.  Our representation of sequents also requires the use
of ordinary introduction rules.  If we had no regard for readability, we
could treat the right side more uniformly by representing sequents as
\[ \List{P@1;\ldots;P@m; \neg Q@1;\ldots; \neg Q@n}\Imp \bot. \]

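At the \ML{} level, a swapped rule can be constructed by resolving an
introduction rule with the second premise of~$(swap)$, using the infix
operator \ttindex{RSN}.  Assuming rules named {\tt conjI} and {\tt swap},
the following sketch derives the rule~$({\neg\conj}E)$ displayed above:
\begin{ttbox} 
val swapped_conjI = conjI RSN (2, swap);
\end{ttbox}
The classical package performs such resolutions automatically; see {\tt
swapify} below.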

\section{Extra rules for the sequent calculus}
As mentioned, destruction rules such as $({\conj}E1,2)$ and $(\forall E)$
must be replaced by sequent-style elimination rules.  In addition, we need
rules to embody the classical equivalence between $P\imp Q$ and $\neg P\disj
Q$.  The introduction rules~$({\disj}I1,2)$ are replaced by a rule that
simulates $({\disj}R)$:
\[ (\neg Q\Imp P) \Imp P\disj Q \]
The destruction rule $({\imp}E)$ is replaced by
\[ \List{P\imp Q;\; \neg P\imp R;\; Q\imp R} \Imp R. \]
Quantifier replication also requires special rules.  In classical logic,
$\exists x{.}P$ is equivalent to $\neg\forall x{.}\neg P$, and the rules
$(\exists R)$ and $(\forall L)$ are dual:
\[ \ainfer{\Gamma &\turn \Delta, \exists x{.}P}
          {\Gamma &\turn \Delta, \exists x{.}P, P[t/x]} \; (\exists R)
   \qquad
   \ainfer{\forall x{.}P, \Gamma &\turn \Delta}
          {P[t/x], \forall x{.}P, \Gamma &\turn \Delta} \; (\forall L)
\]
Thus both kinds of quantifier may be replicated.  Theorems requiring
multiple uses of a universal formula are easy to invent; consider 
$(\forall x.P(x)\imp P(f(x))) \conj P(a) \imp P(f^n(a))$, for any~$n>1$.
Natural examples of the multiple use of an existential formula are rare; a
standard one is $\exists x.\forall y. P(x)\imp P(y)$.

Forgoing quantifier replication loses completeness, but gains decidability,
since the search space becomes finite.  Many useful theorems can be proved
without replication, and the search generally delivers its verdict in a
reasonable time.  To adopt this approach, represent the sequent rules
$(\exists R)$, $(\exists L)$ and $(\forall R)$ by $(\exists I)$, $(\exists
E)$ and $(\forall I)$, respectively, and put $(\forall E)$ into elimination
form:
$$ \List{\forall x{.}P(x); P(t)\Imp Q} \Imp Q    \eqno(\forall E@2) $$
Elim-resolution with this rule will delete the universal formula after a
single use.  To replicate universal quantifiers, replace the rule by
$$ \List{\forall x{.}P(x);\; \List{P(t); \forall x{.}P(x)}\Imp Q} \Imp Q.
   \eqno(\forall E@3) $$
To replicate existential quantifiers, replace $(\exists I)$ by
\[ \Bigl(\neg(\exists x{.}P(x)) \Imp P(t)\Bigr) \Imp \exists x{.}P(x). \]
All introduction rules mentioned above are also useful in swapped form.

Replication makes the search space infinite; we must apply the rules with
care.  The classical package distinguishes between safe and unsafe
rules, applying the latter only when there is no alternative.  Depth-first
search may well go down a blind alley; best-first search is better behaved
in an infinite search space.  However, quantifier replication is so
expensive that only the simplest theorems can be proved with it.


\section{Classical rule sets}
\index{classical sets|bold}
Each automatic tactic takes a {\bf classical rule set}: a collection of
rules, classified as introduction or elimination and as {\bf safe} or {\bf
unsafe}.  In general, safe rules can be attempted blindly, while unsafe
rules must be used with care.  A safe rule must never reduce a provable
goal to an unprovable set of subgoals.  The rule~$({\disj}I1)$ is obviously
unsafe, since it reduces $P\disj Q$ to~$P$; fortunately, we do not require
this rule.

In general, any rule is unsafe whose premises contain new unknowns.  The
elimination rule~$(\forall E@2)$ is unsafe, since it is applied via
elim-resolution, which discards the assumption $\forall x{.}P(x)$ and
replaces it by the weaker assumption~$P(\Var{t})$.  The rule $({\exists}I)$
is unsafe for similar reasons.  The rule~$(\forall E@3)$ is unsafe in a
different sense: since it keeps the assumption $\forall x{.}P(x)$, it is
prone to looping.  In classical first-order logic, all rules are safe
except those mentioned above.

The safe/unsafe distinction is vague, and may be regarded merely as a way
of giving some rules priority over others.  One could argue that
$({\disj}E)$ is unsafe, because repeated application of it could generate
exponentially many subgoals.  Induction rules are unsafe because inductive
proofs are difficult to set up automatically.  Any inference is unsafe that
instantiates an unknown in the proof state --- thus \ttindex{match_tac}
must be used, rather than \ttindex{resolve_tac}.  Even proof by assumption
is unsafe if it instantiates unknowns shared with other subgoals --- thus
\ttindex{eq_assume_tac} must be used, rather than \ttindex{assume_tac}.

Classical rule sets belong to the abstract type \ttindexbold{claset}, which
supports the following operations (provided the classical prover is
installed!):
\begin{ttbox} 
empty_cs : claset
addSIs   : claset * thm list -> claset                 \hfill{\bf infix 4}
addSEs   : claset * thm list -> claset                 \hfill{\bf infix 4}
addSDs   : claset * thm list -> claset                 \hfill{\bf infix 4}
addIs    : claset * thm list -> claset                 \hfill{\bf infix 4}
addEs    : claset * thm list -> claset                 \hfill{\bf infix 4}
addDs    : claset * thm list -> claset                 \hfill{\bf infix 4}
print_cs : claset -> unit
\end{ttbox}
There are no operations for deletion from a classical set.  The add
operations do not check for repetitions.
\begin{description}
\item[\ttindexbold{empty_cs}] is the empty classical set.

\item[\tt $cs$ addSIs $rules$] \indexbold{*addSIs}
adds some safe introduction~$rules$ to the classical set~$cs$.

\item[\tt $cs$ addSEs $rules$] \indexbold{*addSEs}
adds some safe elimination~$rules$ to the classical set~$cs$.

\item[\tt $cs$ addSDs $rules$] \indexbold{*addSDs}
adds some safe destruction~$rules$ to the classical set~$cs$.

\item[\tt $cs$ addIs $rules$] \indexbold{*addIs}
adds some unsafe introduction~$rules$ to the classical set~$cs$.

\item[\tt $cs$ addEs $rules$] \indexbold{*addEs}
adds some unsafe elimination~$rules$ to the classical set~$cs$.

\item[\tt $cs$ addDs $rules$] \indexbold{*addDs}
adds some unsafe destruction~$rules$ to the classical set~$cs$.

\item[\ttindexbold{print_cs} $cs$] prints the rules of the classical set~$cs$.
\end{description}
Introduction rules are those that can be applied using ordinary resolution.
The classical set automatically generates their swapped forms, which will
be applied using elim-resolution.  Elimination rules are applied using
elim-resolution.  In a classical set, rules are sorted by the number of new
subgoals they will yield, so that rules that generate the fewest subgoals
will be tried first (see \S\ref{biresolve_tac}).

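For example, a rule set for classical propositional logic might be built
as follows.  The rule names are those of~FOL, and the exact choice of
rules is illustrative only; {\tt disjCI} names the classical introduction
rule $(\neg Q\Imp P) \Imp P\disj Q$ of the previous section:
\begin{ttbox} 
val prop_cs = empty_cs addSIs [refl, TrueI, conjI, disjCI, impI, notI, iffI]
                       addSEs [conjE, disjE, impCE, FalseE, iffE];
\end{ttbox}
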
For a given classical set, the proof strategy is simple.  Perform as many
safe inferences as possible; or else, apply certain safe rules, allowing
instantiation of unknowns; or else, apply an unsafe rule.  The tactics may
also apply \ttindex{hyp_subst_tac}, if they have been set up to do so (see
below).  They may perform a form of Modus Ponens: if there are assumptions
$P\imp Q$ and~$P$, then replace $P\imp Q$ by~$Q$.

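In outline, the basic step of this strategy (the {\tt step_tac} of the
next section) could be expressed with Isabelle's tacticals roughly as
follows.  This is only a sketch, not the package's actual definition;
{\tt unsafe_step_tac} is a hypothetical name for the unsafe-rule phase:
\begin{ttbox} 
fun step_tac cs i = safe_tac cs  ORELSE  inst_step_tac cs i
                                 ORELSE  unsafe_step_tac cs i;
\end{ttbox}
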

\section{The classical tactics}
\index{classical prover!tactics|bold}
\index{tactics!for classical proof|bold}
If installed, the classical module provides several tactics (and other
operations) for simulating the classical sequent calculus.

\subsection{Single-step tactics}
\begin{ttbox} 
safe_step_tac : claset -> int -> tactic
safe_tac      : claset        -> tactic
inst_step_tac : claset -> int -> tactic
step_tac      : claset -> int -> tactic
slow_step_tac : claset -> int -> tactic
\end{ttbox}
The automatic proof procedures call these tactics.  By calling them
yourself, you can execute these procedures one step at a time.
\begin{description}
\item[\ttindexbold{safe_step_tac} $cs$ $i$] performs a safe step on
subgoal~$i$.  This may include proof by assumption or Modus Ponens (taking
care not to instantiate unknowns), or substitution using
\ttindex{hyp_subst_tac}.

\item[\ttindexbold{safe_tac} $cs$] repeatedly performs safe steps on all 
subgoals.  It is deterministic, with at most one outcome.  If the automatic
tactics fail, try using {\tt safe_tac} to open up your formula; then you
can replicate certain quantifiers explicitly by applying appropriate rules.

\item[\ttindexbold{inst_step_tac} $cs$ $i$] is like {\tt safe_step_tac},
but allows unknowns to be instantiated.

\item[\ttindexbold{step_tac} $cs$ $i$] tries {\tt safe_tac}.  If this
fails, it tries {\tt inst_step_tac}, or applies an unsafe rule from~$cs$.
This is the basic step of the proof procedure.

\item[\ttindexbold{slow_step_tac}] 
  resembles {\tt step_tac}, but allows backtracking between using safe
  rules with instantiation ({\tt inst_step_tac}) and using unsafe rules.
  The resulting search space is too large for use in the standard proof
  procedures, but {\tt slow_step_tac} is worth considering in special
  situations.
\end{description}

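By way of illustration, you might open up a goal with {\tt safe_tac} and
then single-step through the remainder of the proof, inspecting the proof
state after each inference.  Assuming a rule set {\tt FOL_cs}, such a
session could run as follows (a sketch):
\begin{ttbox} 
by (safe_tac FOL_cs);
by (step_tac FOL_cs 1);
by (step_tac FOL_cs 1);
\end{ttbox}
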

\subsection{The automatic tactics}
\begin{ttbox} 
fast_tac : claset -> int -> tactic
best_tac : claset -> int -> tactic
\end{ttbox}
Both of these tactics work by applying {\tt step_tac} repeatedly.  Their
effect is restricted (by {\tt SELECT_GOAL}) to one subgoal; they either
solve this subgoal or fail.
\begin{description}
\item[\ttindexbold{fast_tac} $cs$ $i$] applies {\tt step_tac} using
depth-first search, to solve subgoal~$i$.

\item[\ttindexbold{best_tac} $cs$ $i$] applies {\tt step_tac} using
best-first search, to solve subgoal~$i$.  A heuristic function ---
typically, the total size of the proof state --- guides the search.  This
function is supplied when the classical module is set up.
\end{description}

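For example, Pelletier's problem~40, stated at the start of this chapter,
can be proved in a single step.  Assuming FOL's concrete syntax and a
suitable rule set {\tt FOL_cs}, the session might look like this:
\begin{ttbox} 
goal FOL.thy
 "(EX y. ALL x. J(y,x) <-> ~J(x,x)) --> ~(ALL x. EX y. ALL z. J(z,y) <-> ~J(z,x))";
by (fast_tac FOL_cs 1);
\end{ttbox}
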

\subsection{Other useful tactics}
\index{tactics!for contradiction|bold}
\index{tactics!for Modus Ponens|bold}
\begin{ttbox} 
contr_tac    :             int -> tactic
mp_tac       :             int -> tactic
eq_mp_tac    :             int -> tactic
swap_res_tac : thm list -> int -> tactic
\end{ttbox}
These can be used in the body of a specialized search.
\begin{description}
\item[\ttindexbold{contr_tac} {\it i}] solves subgoal~$i$ by detecting a
contradiction among two assumptions of the form $P$ and~$\neg P$, failing
if it cannot.  It may instantiate unknowns.  The tactic can produce
multiple outcomes, enumerating all possible contradictions.

\item[\ttindexbold{mp_tac} {\it i}] 
is like {\tt contr_tac}, but also attempts to perform Modus Ponens in
subgoal~$i$.  If there are assumptions $P\imp Q$ and~$P$, then it replaces
$P\imp Q$ by~$Q$.  It may instantiate unknowns.  It fails if it can do
nothing.

\item[\ttindexbold{eq_mp_tac} {\it i}] 
is like {\tt mp_tac} {\it i}, but may not instantiate unknowns --- thus, it
is safe.

\item[\ttindexbold{swap_res_tac} {\it thms} {\it i}] refines subgoal~$i$ of
the proof state using {\it thms}, which should be a list of introduction
rules.  First, it attempts to solve the goal using \ttindex{assume_tac} or
{\tt contr_tac}.  It then applies each rule in turn, trying ordinary
resolution and also elim-resolution with the swapped form.
\end{description}

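As an illustration, a crude depth-first procedure could be assembled from
these primitives and the standard tacticals.  In this sketch, {\tt rules}
stands for some list of introduction rules:
\begin{ttbox} 
val crude_tac = DEPTH_SOLVE_1 (eq_assume_tac 1  ORELSE  eq_mp_tac 1
                               ORELSE  swap_res_tac rules 1);
\end{ttbox}
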
\subsection{Creating swapped rules}
\begin{ttbox} 
swapify   : thm list -> thm list
joinrules : thm list * thm list -> (bool * thm) list
\end{ttbox}
\begin{description}
\item[\ttindexbold{swapify} {\it thms}] returns a list consisting of the
swapped versions of~{\it thms}, regarded as introduction rules.

\item[\ttindexbold{joinrules} \tt({\it intrs}, {\it elims})]
joins introduction rules, their swapped versions, and elimination rules for
use with \ttindex{biresolve_tac}.  Each rule is paired with~{\tt false}
(indicating ordinary resolution) or~{\tt true} (indicating
elim-resolution).
\end{description}

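The output of {\tt joinrules} is suitable for \ttindex{biresolve_tac}.
For instance, the following sketch (with FOL rule names) tries two
introduction rules, their swapped forms, and an elimination rule on
subgoal~1:
\begin{ttbox} 
by (biresolve_tac (joinrules ([conjI, disjCI], [disjE])) 1);
\end{ttbox}
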

\section{Setting up the classical prover}
\index{classical prover!setting up|bold}
Isabelle's classical object-logics, including {\tt FOL} and {\tt HOL}, have
the classical prover already set up.  When defining a new classical logic,
you should set up the prover yourself.  It consists of the \ML{} functor
\ttindex{ClassicalFun}, which takes the argument
signature\indexbold{*CLASSICAL_DATA}
\begin{ttbox} 
signature CLASSICAL_DATA =
  sig
  val mp             : thm
  val not_elim       : thm
  val swap           : thm
  val sizef          : thm -> int
  val hyp_subst_tacs : (int -> tactic) list
  end;
\end{ttbox}
Thus, the functor requires the following items:
\begin{description}
\item[\ttindexbold{mp}] should be the Modus Ponens rule
$\List{\Var{P}\imp\Var{Q};\; \Var{P}} \Imp \Var{Q}$.

\item[\ttindexbold{not_elim}] should be the contradiction rule
$\List{\neg\Var{P};\; \Var{P}} \Imp \Var{R}$.

\item[\ttindexbold{swap}] should be the swap rule
$\List{\neg \Var{P}; \; \neg \Var{R}\Imp \Var{P}} \Imp \Var{R}$.

\item[\ttindexbold{sizef}] is the heuristic function used for best-first
search.  It should estimate the size of the remaining subgoals.  A good
heuristic function is \ttindex{size_of_thm}, which measures the size of the
proof state.  Another size function might ignore certain subgoals (say,
those concerned with type checking).  A heuristic function might simply
count the subgoals.

\item[\ttindexbold{hyp_subst_tacs}] is a list of tactics for substitution in
the hypotheses, typically created by \ttindex{HypsubstFun} (see
Chapter~\ref{substitution}).  This list can, of course, be empty.  The
tactics are assumed to be safe!
\end{description}
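For example, the instantiation for a logic like FOL might look as follows;
the structure name and the theorem names are illustrative:
\begin{ttbox} 
structure Classical = ClassicalFun(
  struct
  val mp             = mp
  val not_elim       = notE
  val swap           = swap
  val sizef          = size_of_thm
  val hyp_subst_tacs = [hyp_subst_tac]
  end);
\end{ttbox}
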
The functor is not at all sensitive to the formalization of the
object-logic.  It does not even examine the rules, but merely applies them
according to its fixed strategy.  The functor resides in {\tt
Provers/classical.ML} in the Isabelle distribution directory.

\index{classical prover|)}