%% $Id$
\chapter{The Classical Reasoner}\label{chap:classical}
\index{classical reasoner|(}
\newcommand\ainfer[2]{\begin{array}{r@{\,}l}#2\\ \hline#1\end{array}}

Although Isabelle is generic, many users will be working in some
extension of classical first-order logic.  Isabelle's set theory~{\tt
ZF} is built upon theory~\texttt{FOL}, while {\HOL}
conceptually contains first-order logic as a fragment.
Theorem-proving in predicate logic is undecidable, but many
researchers have developed strategies to assist in this task.

Isabelle's classical reasoner is an \ML{} functor that accepts certain
information about a logic and delivers a suite of automatic tactics.  Each
tactic takes a collection of rules and executes a simple, non-clausal proof
procedure.  They are slow and simplistic compared with resolution theorem
provers, but they can save considerable time and effort.  They can prove
theorems such as Pelletier's~\cite{pelletier86} problems~40 and~41 in
seconds:
\[ (\exists y. \forall x. J(y,x) \bimp \neg J(x,x))
   \imp \neg (\forall x. \exists y. \forall z. J(z,y) \bimp \neg J(z,x)) \]
\[ (\forall z. \exists y. \forall x. F(x,y) \bimp F(x,z) \conj \neg F(x,x))
   \imp \neg (\exists z. \forall x. F(x,z))
\]
%
The tactics are generic.  They are not restricted to first-order logic, and
have been heavily used in the development of Isabelle's set theory.  Few
interactive proof assistants provide this much automation.  The tactics can
be traced, and their components can be called directly; in this manner,
any proof can be viewed interactively.

The simplest way to apply the classical reasoner (to subgoal~$i$) is to type
\begin{ttbox}
by (Blast_tac \(i\));
\end{ttbox}
This command quickly proves most simple formulas of the predicate calculus or
set theory.  To attempt to prove \emph{all} subgoals using a combination of
rewriting and classical reasoning, try
\begin{ttbox}
by Auto_tac;
\end{ttbox}
To do all obvious logical steps, even if they do not prove the
subgoal, type one of the following:
\begin{ttbox}
by (Clarify_tac \(i\));    \emph{\textrm{applies to one subgoal}}
by Safe_tac;               \emph{\textrm{applies to all subgoals}}
\end{ttbox}
You need to know how the classical reasoner works in order to use it
effectively.  There are many tactics to choose from, including {\tt
Fast_tac} and \texttt{Best_tac}.
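
For example, the first of the Pelletier problems above can be given to
\texttt{Blast_tac} directly.  The following session is illustrative only; it
assumes the syntax of Isabelle's \texttt{FOL} and the old-style \texttt{goal}
command:
\begin{ttbox}
goal FOL.thy "(EX y. ALL x. J(y,x) <-> ~J(x,x)) --> ~(ALL x. EX y. ALL z. J(z,y) <-> ~J(z,x))";
by (Blast_tac 1);
\end{ttbox}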

We shall first discuss the underlying principles, then present the
classical reasoner.  Finally, we shall see how to instantiate it for
new logics.  The logics \FOL, \ZF, {\HOL} and {\HOLCF} have it already
installed.


\section{The sequent calculus}
\index{sequent calculus}
Isabelle supports natural deduction, which is easy to use for interactive
proof.  But natural deduction does not easily lend itself to automation,
and has a bias towards intuitionism.  For certain proofs in classical
logic, it cannot be called natural.  The {\bf sequent calculus}, a
generalization of natural deduction, is easier to automate.

A {\bf sequent} has the form $\Gamma\turn\Delta$, where $\Gamma$
and~$\Delta$ are sets of formulae.%
\footnote{For first-order logic, sequents can equivalently be made from
  lists or multisets of formulae.} The sequent
\[ P@1,\ldots,P@m\turn Q@1,\ldots,Q@n \]
is {\bf valid} if $P@1\conj\ldots\conj P@m$ implies $Q@1\disj\ldots\disj
Q@n$.  Thus $P@1,\ldots,P@m$ represent assumptions, each of which is true,
while $Q@1,\ldots,Q@n$ represent alternative goals.  A sequent is {\bf
basic} if its left and right sides have a common formula, as in $P,Q\turn
Q,R$; basic sequents are trivially valid.

Sequent rules are classified as {\bf right} or {\bf left}, indicating which
side of the $\turn$~symbol they operate on.  Rules that operate on the
right side are analogous to natural deduction's introduction rules, and
left rules are analogous to elimination rules.
Recall the natural deduction rules for
first-order logic,
\iflabelundefined{fol-fig}{from {\it Introduction to Isabelle}}%
                          {Fig.\ts\ref{fol-fig}}.
The sequent calculus analogue of~$({\imp}I)$ is the rule
$$
\ainfer{\Gamma &\turn \Delta, P\imp Q}{P,\Gamma &\turn \Delta,Q}
\eqno({\imp}R)
$$
This breaks down some implication on the right side of a sequent; $\Gamma$
and $\Delta$ stand for the sets of formulae that are unaffected by the
inference.  The analogue of the pair~$({\disj}I1)$ and~$({\disj}I2)$ is the
single rule
$$
\ainfer{\Gamma &\turn \Delta, P\disj Q}{\Gamma &\turn \Delta,P,Q}
\eqno({\disj}R)
$$
This breaks down some disjunction on the right side, replacing it by both
disjuncts.  Thus, the sequent calculus is a kind of multiple-conclusion logic.

To illustrate the use of multiple formulae on the right, let us prove
the classical theorem $(P\imp Q)\disj(Q\imp P)$.  Working backwards, we
reduce this formula to a basic sequent:
\[ \infer[(\disj)R]{\turn(P\imp Q)\disj(Q\imp P)}
   {\infer[(\imp)R]{\turn(P\imp Q), (Q\imp P)\;}
    {\infer[(\imp)R]{P \turn Q, (Q\imp P)\qquad}
                    {P, Q \turn Q, P\qquad\qquad}}}
\]
This example is typical of the sequent calculus: start with the desired
theorem and apply rules backwards in a fairly arbitrary manner.  This yields a
surprisingly effective proof procedure.  Quantifiers add few complications,
since Isabelle handles parameters and schematic variables.  See Chapter~10
of {\em ML for the Working Programmer}~\cite{paulson91} for further
discussion.
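
In Isabelle, a theorem of this sort is proved automatically.  The following
illustrative session (assuming \texttt{FOL} syntax and the old-style
\texttt{goal} command) lets the classical reasoner find essentially the
proof tree above:
\begin{ttbox}
goal FOL.thy "(P --> Q) | (Q --> P)";
by (Blast_tac 1);
\end{ttbox}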


\section{Simulating sequents by natural deduction}
Isabelle can represent sequents directly, as in the object-logic~\texttt{LK}\@.
But natural deduction is easier to work with, and most object-logics employ
it.  Fortunately, we can simulate the sequent $P@1,\ldots,P@m\turn
Q@1,\ldots,Q@n$ by the Isabelle formula
\[ \List{P@1;\ldots;P@m; \neg Q@2;\ldots; \neg Q@n}\Imp Q@1, \]
where the order of the assumptions and the choice of~$Q@1$ are arbitrary.
Elim-resolution plays a key role in simulating sequent proofs.

We can easily handle reasoning on the left.
As discussed in
\iflabelundefined{destruct}{{\it Introduction to Isabelle}}{\S\ref{destruct}},
elim-resolution with the rules $(\disj E)$, $(\bot E)$ and $(\exists E)$
achieves an effect similar to that of the corresponding sequent rules.  For the
other connectives, we use sequent-style elimination rules instead of
destruction rules such as $({\conj}E1,2)$ and $(\forall E)$.  But note that
the rule $(\neg L)$ has no effect under our representation of sequents!
$$
\ainfer{\neg P,\Gamma &\turn \Delta}{\Gamma &\turn \Delta,P}\eqno({\neg}L)
$$
What about reasoning on the right?  Introduction rules can only affect the
formula in the conclusion, namely~$Q@1$.  The other right-side formulae are
represented as negated assumptions, $\neg Q@2$, \ldots,~$\neg Q@n$.
\index{assumptions!negated}
In order to operate on one of these, it must first be exchanged with~$Q@1$.
Elim-resolution with the {\bf swap} rule has this effect:
$$ \List{\neg P; \; \neg R\Imp P} \Imp R \eqno(swap) $$
To ensure that swaps occur only when necessary, each introduction rule is
converted into a swapped form: it is resolved with the second premise
of~$(swap)$.  The swapped form of~$({\conj}I)$, which might be
called~$({\neg\conj}E)$, is
\[ \List{\neg(P\conj Q); \; \neg R\Imp P; \; \neg R\Imp Q} \Imp R. \]
Similarly, the swapped form of~$({\imp}I)$ is
\[ \List{\neg(P\imp Q); \; \List{\neg R;P}\Imp Q} \Imp R \]
Swapped introduction rules are applied using elim-resolution, which deletes
the negated formula.  Our representation of sequents also requires the use
of ordinary introduction rules.  If we had no regard for readability, we
could treat the right side more uniformly by representing sequents as
\[ \List{P@1;\ldots;P@m; \neg Q@1;\ldots; \neg Q@n}\Imp \bot. \]


\section{Extra rules for the sequent calculus}
As mentioned, destruction rules such as $({\conj}E1,2)$ and $(\forall E)$
must be replaced by sequent-style elimination rules.  In addition, we need
rules to embody the classical equivalence between $P\imp Q$ and $\neg P\disj
Q$.  The introduction rules~$({\disj}I1,2)$ are replaced by a rule that
simulates $({\disj}R)$:
\[ (\neg Q\Imp P) \Imp P\disj Q \]
The destruction rule $({\imp}E)$ is replaced by
\[ \List{P\imp Q;\; \neg P\Imp R;\; Q\Imp R} \Imp R. \]
Quantifier replication also requires special rules.  In classical logic,
$\exists x{.}P$ is equivalent to $\neg\forall x{.}\neg P$; the rules
$(\exists R)$ and $(\forall L)$ are dual:
\[ \ainfer{\Gamma &\turn \Delta, \exists x{.}P}
          {\Gamma &\turn \Delta, \exists x{.}P, P[t/x]} \; (\exists R)
   \qquad
   \ainfer{\forall x{.}P, \Gamma &\turn \Delta}
          {P[t/x], \forall x{.}P, \Gamma &\turn \Delta} \; (\forall L)
\]
Thus both kinds of quantifier may be replicated.  Theorems requiring
multiple uses of a universal formula are easy to invent; consider
\[ (\forall x.P(x)\imp P(f(x))) \conj P(a) \imp P(f^n(a)), \]
for any~$n>1$.  Natural examples of the multiple use of an existential
formula are rare; a standard one is $\exists x.\forall y. P(x)\imp P(y)$.
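
The automatic tactics replicate quantifiers for themselves, so such theorems
can usually be proved in one step.  An illustrative session, assuming
\texttt{FOL} syntax:
\begin{ttbox}
goal FOL.thy "EX x. ALL y. P(x) --> P(y)";
by (Blast_tac 1);
\end{ttbox}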

Forgoing quantifier replication loses completeness, but gains decidability,
since the search space becomes finite.  Many useful theorems can be proved
without replication, and the search generally delivers its verdict in a
reasonable time.  To adopt this approach, represent the sequent rules
$(\exists R)$, $(\exists L)$ and $(\forall R)$ by $(\exists I)$, $(\exists
E)$ and $(\forall I)$, respectively, and put $(\forall E)$ into elimination
form:
$$ \List{\forall x{.}P(x); P(t)\Imp Q} \Imp Q \eqno(\forall E@2) $$
Elim-resolution with this rule will delete the universal formula after a
single use.  To replicate universal quantifiers, replace the rule by
$$
\List{\forall x{.}P(x);\; \List{P(t); \forall x{.}P(x)}\Imp Q} \Imp Q.
\eqno(\forall E@3)
$$
To replicate existential quantifiers, replace $(\exists I)$ by
\[ \List{\neg(\exists x{.}P(x)) \Imp P(t)} \Imp \exists x{.}P(x). \]
All introduction rules mentioned above are also useful in swapped form.

Replication makes the search space infinite; we must apply the rules with
care.  The classical reasoner distinguishes between safe and unsafe
rules, applying the latter only when there is no alternative.  Depth-first
search may well go down a blind alley; best-first search is better behaved
in an infinite search space.  However, quantifier replication is too
expensive to prove any but the simplest theorems.


\section{Classical rule sets}
\index{classical sets}
Each automatic tactic takes a {\bf classical set} --- a collection of
rules, classified as introduction or elimination and as {\bf safe} or {\bf
unsafe}.  In general, safe rules can be attempted blindly, while unsafe
rules must be used with care.  A safe rule must never reduce a provable
goal to an unprovable set of subgoals.

The rule~$({\disj}I1)$ is unsafe because it reduces $P\disj Q$ to~$P$.  Any
rule is unsafe whose premises contain new unknowns.  The elimination
rule~$(\forall E@2)$ is unsafe, since it is applied via elim-resolution,
which discards the assumption $\forall x{.}P(x)$ and replaces it by the
weaker assumption~$P(\Var{t})$.  The rule $({\exists}I)$ is unsafe for
similar reasons.  The rule~$(\forall E@3)$ is unsafe in a different sense:
since it keeps the assumption $\forall x{.}P(x)$, it is prone to looping.
In classical first-order logic, all rules are safe except those mentioned
above.

The safe/unsafe distinction is vague, and may be regarded merely as a way
of giving some rules priority over others.  One could argue that
$({\disj}E)$ is unsafe, because repeated application of it could generate
exponentially many subgoals.  Induction rules are unsafe because inductive
proofs are difficult to set up automatically.  Any inference is unsafe that
instantiates an unknown in the proof state --- thus \ttindex{match_tac}
must be used, rather than \ttindex{resolve_tac}.  Even proof by assumption
is unsafe if it instantiates unknowns shared with other subgoals --- thus
\ttindex{eq_assume_tac} must be used, rather than \ttindex{assume_tac}.

\subsection{Adding rules to classical sets}
Classical rule sets belong to the abstract type \mltydx{claset}, which
supports the following operations (provided the classical reasoner is
installed!):
\begin{ttbox}
empty_cs : claset
print_cs : claset -> unit
rep_cs   : claset -> {safeEs: thm list, safeIs: thm list,
                      hazEs: thm list, hazIs: thm list,
                      swrappers: (string * wrapper) list,
                      uwrappers: (string * wrapper) list,
                      safe0_netpair: netpair, safep_netpair: netpair,
                      haz_netpair: netpair, dup_netpair: netpair}
addSIs   : claset * thm list -> claset                \hfill{\bf infix 4}
addSEs   : claset * thm list -> claset                \hfill{\bf infix 4}
addSDs   : claset * thm list -> claset                \hfill{\bf infix 4}
addIs    : claset * thm list -> claset                \hfill{\bf infix 4}
addEs    : claset * thm list -> claset                \hfill{\bf infix 4}
addDs    : claset * thm list -> claset                \hfill{\bf infix 4}
delrules : claset * thm list -> claset                \hfill{\bf infix 4}
\end{ttbox}
The add operations ignore any rule already present in the claset with the same
classification (such as Safe Introduction).  They print a warning if the rule
has already been added with some other classification, but add the rule
anyway.  Calling \texttt{delrules} deletes all occurrences of a rule from the
claset, but see the warning below concerning destruction rules.
\begin{ttdescription}
\item[\ttindexbold{empty_cs}] is the empty classical set.

\item[\ttindexbold{print_cs} $cs$] displays the printable contents of~$cs$,
  namely the rules; the other components cannot be printed.

\item[\ttindexbold{rep_cs} $cs$] decomposes $cs$ as a record of its internal
  components, namely the safe introduction and elimination rules, the unsafe
  introduction and elimination rules, the lists of safe and unsafe wrappers
  (see \S\ref{sec:modifying-search}), and the internalized forms of the rules.

\item[$cs$ addSIs $rules$] \indexbold{*addSIs}
adds safe introduction~$rules$ to~$cs$.

\item[$cs$ addSEs $rules$] \indexbold{*addSEs}
adds safe elimination~$rules$ to~$cs$.

\item[$cs$ addSDs $rules$] \indexbold{*addSDs}
adds safe destruction~$rules$ to~$cs$.

\item[$cs$ addIs $rules$] \indexbold{*addIs}
adds unsafe introduction~$rules$ to~$cs$.

\item[$cs$ addEs $rules$] \indexbold{*addEs}
adds unsafe elimination~$rules$ to~$cs$.

\item[$cs$ addDs $rules$] \indexbold{*addDs}
adds unsafe destruction~$rules$ to~$cs$.

\item[$cs$ delrules $rules$] \indexbold{*delrules}
deletes~$rules$ from~$cs$.  It prints a warning for those rules that are not
in~$cs$.
\end{ttdescription}
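
For illustration, a small classical set for first-order logic might be built
up as follows (a sketch only; the rule names are those of Isabelle's
\texttt{FOL}):
\begin{ttbox}
val cs = empty_cs addSIs [conjI, impI]
                  addSEs [conjE, disjE]
                  addIs  [disjI1, disjI2];
\end{ttbox}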

\begin{warn}
  If you added $rule$ using \texttt{addSDs} or \texttt{addDs}, then you must delete
  it as follows:
\begin{ttbox}
\(cs\) delrules [make_elim \(rule\)]
\end{ttbox}
\par\noindent
This is necessary because the operators \texttt{addSDs} and \texttt{addDs} convert
the destruction rules to elimination rules by applying \ttindex{make_elim},
and then insert them using \texttt{addSEs} and \texttt{addEs}, respectively.
\end{warn}

Introduction rules are those that can be applied using ordinary resolution.
The classical set automatically generates their swapped forms, which will
be applied using elim-resolution.  Elimination rules are applied using
elim-resolution.  In a classical set, rules are sorted by the number of new
subgoals they will yield; rules that generate the fewest subgoals will be
tried first (see \S\ref{biresolve_tac}).


\subsection{Modifying the search step}
\label{sec:modifying-search}
For a given classical set, the proof strategy is simple.  Perform as many safe
inferences as possible; or else, apply certain safe rules, allowing
instantiation of unknowns; or else, apply an unsafe rule.  The tactics also
eliminate assumptions of the form $x=t$ by substitution if they have been set
up to do so (see \texttt{hyp_subst_tacs} in~\S\ref{sec:classical-setup} below).
They may perform a form of Modus Ponens: if there are assumptions $P\imp Q$
and~$P$, then replace $P\imp Q$ by~$Q$.

The classical reasoning tactics --- except \texttt{blast_tac}! --- allow
you to modify this basic proof strategy by applying two lists of arbitrary
{\bf wrapper tacticals} to it.
The first wrapper list, which is considered to contain safe wrappers only,
affects \ttindex{safe_step_tac} and all the tactics that call it.
The second one, which may contain unsafe wrappers, affects \ttindex{step_tac},
\ttindex{slow_step_tac}, and the tactics that call them.
A wrapper transforms each step of the search, for example
by invoking other tactics before the original step tactic, or as an
alternative to it.
All members of a wrapper list are applied in turn to the respective step tactic.

Initially the two wrapper lists are empty, which means no modification of the
step tactics.  Safe and unsafe wrappers are added to a claset
with the functions given below, supplying them with wrapper names.
These names may be used to selectively delete wrappers.

\begin{ttbox}
type wrapper = (int -> tactic) -> (int -> tactic);

addSWrapper : claset * (string * wrapper)           -> claset \hfill{\bf infix 4}
addSbefore  : claset * (string * (int -> tactic))   -> claset \hfill{\bf infix 4}
addSaltern  : claset * (string * (int -> tactic))   -> claset \hfill{\bf infix 4}
delSWrapper : claset * string                       -> claset \hfill{\bf infix 4}

addWrapper  : claset * (string * wrapper)           -> claset \hfill{\bf infix 4}
addbefore   : claset * (string * (int -> tactic))   -> claset \hfill{\bf infix 4}
addaltern   : claset * (string * (int -> tactic))   -> claset \hfill{\bf infix 4}
delWrapper  : claset * string                       -> claset \hfill{\bf infix 4}

addSss      : claset * simpset -> claset                      \hfill{\bf infix 4}
addss       : claset * simpset -> claset                      \hfill{\bf infix 4}
\end{ttbox}
%

\begin{ttdescription}
\item[$cs$ addSWrapper $(name,wrapper)$] \indexbold{*addSWrapper}
adds a new wrapper, which should yield a safe tactic,
to modify the existing safe step tactic.

\item[$cs$ addSbefore $(name,tac)$] \indexbold{*addSbefore}
adds the given tactic as a safe wrapper, such that it is
applied {\em before} each safe step of the search.

\item[$cs$ addSaltern $(name,tac)$] \indexbold{*addSaltern}
adds the given tactic as a safe wrapper, such that it is
applied when a safe step of the search would fail.

\item[$cs$ delSWrapper $name$] \indexbold{*delSWrapper}
deletes the safe wrapper with the given name.

\item[$cs$ addWrapper $(name,wrapper)$] \indexbold{*addWrapper}
adds a new wrapper to modify the existing (unsafe) step tactic.

\item[$cs$ addbefore $(name,tac)$] \indexbold{*addbefore}
adds the given tactic as an unsafe wrapper, such that it is
applied {\em before} each step of the search.

\item[$cs$ addaltern $(name,tac)$] \indexbold{*addaltern}
adds the given tactic as an unsafe wrapper, such that it is
applied when a step of the search would fail.

\item[$cs$ delWrapper $name$] \indexbold{*delWrapper}
deletes the unsafe wrapper with the given name.

\item[$cs$ addSss $ss$] \indexbold{*addSss}
adds the simpset~$ss$ to the classical set.  The assumptions and goal will be
simplified, in a rather safe way, after each safe step of the search.

\item[$cs$ addss $ss$] \indexbold{*addss}
adds the simpset~$ss$ to the classical set.  The assumptions and goal will be
simplified before each unsafe step of the search.

\end{ttdescription}
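
For example, a wrapper that attempts proof by assumption before each unsafe
step might be installed, and later removed by name, as follows (a sketch;
$cs$ stands for an existing claset):
\begin{ttbox}
val cs'  = cs  addbefore  ("assumption", assume_tac);
val cs'' = cs' delWrapper "assumption";
\end{ttbox}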

\index{simplification!from classical reasoner} The wrapper tacticals
underlie the operators \texttt{addss} and \texttt{addSss}, which are used as
primitives for the automatic tactics described in
\S\ref{sec:automatic-tactics}.  Strictly speaking, neither operator is part
of the classical reasoner.  They should be defined when the simplifier is
installed:
\begin{ttbox}
infix 4 addSss addss;
fun cs addSss ss = cs addSaltern ("safe_asm_full_simp_tac",
                                  CHANGED o (safe_asm_full_simp_tac ss));
fun cs addss  ss = cs addbefore  ("asm_full_simp_tac",
                                  asm_full_simp_tac ss);
\end{ttbox}
\begin{warn}
Being defined as wrappers, these operators are inappropriate for adding more
than one simpset at a time: the simpset added last overwrites any earlier ones.
When a simpset combined with a claset is to be augmented, this should be done
{\em before} combining it with the claset.
\end{warn}


\section{The classical tactics}
\index{classical reasoner!tactics} If installed, the classical module provides
powerful theorem-proving tactics.  Most of them have capitalized analogues
that use the default claset; see \S\ref{sec:current-claset}.

\subsection{Semi-automatic tactics}
\begin{ttbox}
clarify_tac      : claset -> int -> tactic
clarify_step_tac : claset -> int -> tactic
\end{ttbox}
Use these when the automatic tactics fail.  They perform all the obvious
logical inferences that do not split the subgoal.  The result is a
simpler subgoal that can be tackled by other means, such as by
instantiating quantifiers yourself.
\begin{ttdescription}
\item[\ttindexbold{clarify_tac} $cs$ $i$] performs a series of safe steps on
subgoal~$i$ by repeatedly calling \texttt{clarify_step_tac}.

\item[\ttindexbold{clarify_step_tac} $cs$ $i$] performs a safe step on
subgoal~$i$.  No splitting step is applied; for example, the subgoal $A\conj
B$ is left as a conjunction.  Proof by assumption, Modus Ponens, etc., may be
performed provided they do not instantiate unknowns.  Assumptions of the
form $x=t$ may be eliminated.  The user-supplied safe wrapper tactical is
applied.
\end{ttdescription}
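
For example, given a claset~$cs$, the first subgoal can be reduced to a
clarified form before finishing the proof manually (an illustrative
invocation):
\begin{ttbox}
by (clarify_tac \(cs\) 1);
\end{ttbox}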


\subsection{The tableau prover}
The tactic \texttt{blast_tac} searches for a proof using a fast tableau prover,
coded directly in \ML.  It then reconstructs the proof using Isabelle
tactics.  It is faster and more powerful than the other classical
reasoning tactics, but has major limitations too.
\begin{itemize}
\item It does not use the wrapper tacticals described above, such as
  \ttindex{addss}.
\item It ignores types, which can cause problems in \HOL.  If it applies a rule
  whose types are inappropriate, then proof reconstruction will fail.
\item It does not perform higher-order unification, as needed by the rule {\tt
    rangeI} in {\HOL} and \texttt{RepFunI} in {\ZF}.  There are often
  alternatives to such rules, for example {\tt
    range_eqI} and \texttt{RepFun_eqI}.
\item The message {\small\tt Function Var's argument not a bound variable\ }
  relates to the lack of higher-order unification.  Function variables
  may only be applied to parameters of the subgoal.
\item Its proof strategy is more general than \texttt{fast_tac}'s but can be
  slower.  If \texttt{blast_tac} fails or seems to be running forever, try {\tt
  fast_tac} and the other tactics described below.
\end{itemize}
%
\begin{ttbox}
blast_tac       : claset -> int -> tactic
Blast.depth_tac : claset -> int -> int -> tactic
Blast.trace     : bool ref \hfill{\bf initially false}
\end{ttbox}
The two tactics differ in how they bound the number of unsafe steps used in a
proof.  While \texttt{blast_tac} starts with a bound of zero and increases it
successively to~20, \texttt{Blast.depth_tac} applies a user-supplied search bound.
\begin{ttdescription}
\item[\ttindexbold{blast_tac} $cs$ $i$] tries to prove
  subgoal~$i$ using iterative deepening to increase the search bound.

\item[\ttindexbold{Blast.depth_tac} $cs$ $lim$ $i$] tries
  to prove subgoal~$i$ using a search bound of $lim$.  Often a slow
  proof using \texttt{blast_tac} can be made much faster by supplying the
  successful search bound to this tactic instead.

\item[set \ttindexbold{Blast.trace};] \index{tracing!of classical prover}
  causes the tableau prover to print a trace of its search.  At each step it
  displays the formula currently being examined and reports whether the branch
  has been closed, extended or split.
\end{ttdescription}
|
paulson@3089
|
491 |
|
paulson@3224
|
492 |
|
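For example, one might let \texttt{blast_tac} find a proof interactively and
then, in a batch proof script, replay it with an explicit bound. The claset
\texttt{cs} and the bound of~4 here are merely illustrative:
\begin{ttbox}
by (blast_tac cs 1);
by (Blast.depth_tac cs 4 1);
\end{ttbox}
Supplying the successful bound to the second call avoids repeating the
iterative search.
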
\subsection{Automatic tactics}\label{sec:automatic-tactics}
\begin{ttbox}
type clasimpset = claset * simpset;
auto_tac  : clasimpset -> tactic
force_tac : clasimpset -> int -> tactic
auto      : unit -> unit
force     : int -> unit
\end{ttbox}
The automatic tactics attempt to prove goals using a combination of
simplification and classical reasoning.
\begin{ttdescription}
\item[\ttindexbold{auto_tac $(cs,ss)$}] is intended for situations where
  there are a lot of mostly trivial subgoals; it proves all the easy ones,
  leaving the ones it cannot prove.
  (Unfortunately, attempting to prove the hard ones may take a long time.)
\item[\ttindexbold{force_tac} $(cs,ss)$ $i$] is intended to prove subgoal~$i$
  completely. It tries to apply all the fancy tactics it knows about,
  performing a rather exhaustive search.
\end{ttdescription}
They must be supplied with both a simpset and a claset; therefore
they are most easily called as \texttt{Auto_tac} and \texttt{Force_tac}, which
use the default claset and simpset (see \S\ref{sec:current-claset} below).
For interactive use,
the shorthand \texttt{auto();} abbreviates \texttt{by Auto_tac;}
while \texttt{force 1;} abbreviates \texttt{by (Force_tac 1);}.

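A typical interactive pattern (the particular goal is immaterial here) is to
dispose of the easy subgoals first and then attack a stubborn one:
\begin{ttbox}
auto();                   (*prove the trivial subgoals*)
force 1;                  (*finish subgoal 1 completely*)
\end{ttbox}
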
\subsection{Other classical tactics}
\begin{ttbox}
fast_tac      : claset -> int -> tactic
best_tac      : claset -> int -> tactic
slow_tac      : claset -> int -> tactic
slow_best_tac : claset -> int -> tactic
\end{ttbox}
These tactics attempt to prove a subgoal using sequent-style reasoning.
Unlike \texttt{blast_tac}, they construct proofs directly in Isabelle. Their
effect is restricted (by \texttt{SELECT_GOAL}) to one subgoal; they either prove
this subgoal or fail. The \texttt{slow_} versions conduct a broader
search.%
\footnote{They may, when backtracking from a failed proof attempt, undo even
  the step of proving a subgoal by assumption.}

The best-first tactics are guided by a heuristic function: typically, the
total size of the proof state. This function is supplied in the functor call
that sets up the classical reasoner.
\begin{ttdescription}
\item[\ttindexbold{fast_tac} $cs$ $i$] applies \texttt{step_tac} using
  depth-first search, to prove subgoal~$i$.

\item[\ttindexbold{best_tac} $cs$ $i$] applies \texttt{step_tac} using
  best-first search, to prove subgoal~$i$.

\item[\ttindexbold{slow_tac} $cs$ $i$] applies \texttt{slow_step_tac} using
  depth-first search, to prove subgoal~$i$.

\item[\ttindexbold{slow_best_tac} $cs$ $i$] applies \texttt{slow_step_tac} using
  best-first search, to prove subgoal~$i$.
\end{ttdescription}


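A reasonable interactive strategy (merely a suggestion, not prescribed by the
tactics themselves) is to try the cheaper searches first, falling back on the
broader ones:
\begin{ttbox}
by (fast_tac cs 1 ORELSE best_tac cs 1 ORELSE slow_tac cs 1);
\end{ttbox}
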
\subsection{Depth-limited automatic tactics}
\begin{ttbox}
depth_tac  : claset -> int -> int -> tactic
deepen_tac : claset -> int -> int -> tactic
\end{ttbox}
These work by exhaustive search up to a specified depth. Unsafe rules are
modified to preserve the formula they act on, so that it can be used
repeatedly. They can prove more goals than \texttt{fast_tac} can, but are much
slower, for example if the assumptions have many universal quantifiers.

The depth limits the number of unsafe steps. If you can estimate the minimum
number of unsafe steps needed, supply this value as~$m$ to save time.
\begin{ttdescription}
\item[\ttindexbold{depth_tac} $cs$ $m$ $i$]
  tries to prove subgoal~$i$ by exhaustive search up to depth~$m$.

\item[\ttindexbold{deepen_tac} $cs$ $m$ $i$]
  tries to prove subgoal~$i$ by iterative deepening. It calls \texttt{depth_tac}
  repeatedly with increasing depths, starting with~$m$.
\end{ttdescription}


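For instance, if a goal is known to need at least two unsafe steps, starting
the iterative deepening there avoids two pointless search passes (the bound
of~2 is only an illustration):
\begin{ttbox}
by (deepen_tac cs 2 1);
\end{ttbox}
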
\subsection{Single-step tactics}
\begin{ttbox}
safe_step_tac : claset -> int -> tactic
safe_tac      : claset -> tactic
inst_step_tac : claset -> int -> tactic
step_tac      : claset -> int -> tactic
slow_step_tac : claset -> int -> tactic
\end{ttbox}
The automatic proof procedures call these tactics. By calling them
yourself, you can execute these procedures one step at a time.
\begin{ttdescription}
\item[\ttindexbold{safe_step_tac} $cs$ $i$] performs a safe step on
  subgoal~$i$. The safe wrapper tacticals are applied to a tactic that may
  include proof by assumption or Modus Ponens (taking care not to instantiate
  unknowns), or substitution.

\item[\ttindexbold{safe_tac} $cs$] repeatedly performs safe steps on all
  subgoals. It is deterministic, with at most one outcome.

\item[\ttindexbold{inst_step_tac} $cs$ $i$] is like \texttt{safe_step_tac},
  but allows unknowns to be instantiated.

\item[\ttindexbold{step_tac} $cs$ $i$] is the basic step of the proof
  procedure. The (unsafe) wrapper tacticals are applied to a tactic that tries
  \texttt{safe_tac}, \texttt{inst_step_tac}, or applies an unsafe rule from~$cs$.

\item[\ttindexbold{slow_step_tac}]
  resembles \texttt{step_tac}, but allows backtracking between using safe
  rules with instantiation (\texttt{inst_step_tac}) and using unsafe rules.
  The resulting search space is larger.
\end{ttdescription}

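To replay an automatic proof attempt by hand, one can apply all the safe steps
at once and then single-step through the rest, repeating the second command as
required:
\begin{ttbox}
by (safe_tac cs);
by (step_tac cs 1);
\end{ttbox}
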
\subsection{The current claset}\label{sec:current-claset}

Each theory is equipped with an implicit \emph{current
claset}\index{claset!current}. This is a default set of classical
rules. The underlying idea is quite similar to that of a current
simpset described in \S\ref{sec:simp-for-dummies}; please read that
section, including its warnings. The implicit claset can be accessed
as follows:
\begin{ttbox}
claset        : unit -> claset
claset_ref    : unit -> claset ref
claset_of     : theory -> claset
claset_ref_of : theory -> claset ref
print_claset  : theory -> unit
\end{ttbox}

The tactics
\begin{ttbox}
Blast_tac        : int -> tactic
Auto_tac         : tactic
Force_tac        : int -> tactic
Fast_tac         : int -> tactic
Best_tac         : int -> tactic
Deepen_tac       : int -> int -> tactic
Clarify_tac      : int -> tactic
Clarify_step_tac : int -> tactic
Safe_tac         : tactic
Safe_step_tac    : int -> tactic
Step_tac         : int -> tactic
\end{ttbox}
\indexbold{*Blast_tac}\indexbold{*Auto_tac}\indexbold{*Force_tac}
\indexbold{*Best_tac}\indexbold{*Fast_tac}%
\indexbold{*Deepen_tac}
\indexbold{*Clarify_tac}\indexbold{*Clarify_step_tac}
\indexbold{*Safe_tac}\indexbold{*Safe_step_tac}
\indexbold{*Step_tac}
make use of the current claset. For example, \texttt{Blast_tac} is defined as
\begin{ttbox}
fun Blast_tac i st = blast_tac (claset()) i st;
\end{ttbox}
and gets the current claset only when it is applied to a proof
state. The functions
\begin{ttbox}
AddSIs, AddSEs, AddSDs, AddIs, AddEs, AddDs: thm list -> unit
\end{ttbox}
\indexbold{*AddSIs} \indexbold{*AddSEs} \indexbold{*AddSDs}
\indexbold{*AddIs} \indexbold{*AddEs} \indexbold{*AddDs}
are used to add rules to the current claset. They work exactly like their
lower-case counterparts, such as \texttt{addSIs}. Calling
\begin{ttbox}
Delrules : thm list -> unit
\end{ttbox}
deletes rules from the current claset.

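For example, assuming \texttt{conjI} and \texttt{exI} name the usual
introduction rules of the object-logic, one could declare the first safe and
the second unsafe (it instantiates unknowns), and later retract the second:
\begin{ttbox}
AddSIs [conjI];
AddIs  [exI];
Delrules [exI];
\end{ttbox}
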
\subsection{Other useful tactics}
\index{tactics!for contradiction}
\index{tactics!for Modus Ponens}
\begin{ttbox}
contr_tac    : int -> tactic
mp_tac       : int -> tactic
eq_mp_tac    : int -> tactic
swap_res_tac : thm list -> int -> tactic
\end{ttbox}
These can be used in the body of a specialized search.
\begin{ttdescription}
\item[\ttindexbold{contr_tac} {\it i}]\index{assumptions!contradictory}
  solves subgoal~$i$ by detecting a contradiction between two assumptions of
  the form $P$ and~$\neg P$, or fails. It may instantiate unknowns. The
  tactic can produce multiple outcomes, enumerating all possible
  contradictions.

\item[\ttindexbold{mp_tac} {\it i}]
  is like \texttt{contr_tac}, but also attempts to perform Modus Ponens in
  subgoal~$i$. If there are assumptions $P\imp Q$ and~$P$, then it replaces
  $P\imp Q$ by~$Q$. It may instantiate unknowns. It fails if it can do
  nothing.

\item[\ttindexbold{eq_mp_tac} {\it i}]
  is like \texttt{mp_tac} {\it i}, but may not instantiate unknowns --- thus, it
  is safe.

\item[\ttindexbold{swap_res_tac} {\it thms} {\it i}] refines subgoal~$i$ of
  the proof state using {\it thms}, which should be a list of introduction
  rules. First, it attempts to prove the goal using \texttt{assume_tac} or
  \texttt{contr_tac}. It then attempts to apply each rule in turn, attempting
  resolution and also elim-resolution with the swapped form.
\end{ttdescription}

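For instance, within a specialized search one might exhaust all Modus Ponens
and contradiction steps on subgoal~1 before doing anything else:
\begin{ttbox}
by (REPEAT (mp_tac 1));
\end{ttbox}
Since \texttt{mp_tac} fails when it can do nothing, the repetition terminates.
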
\subsection{Creating swapped rules}
\begin{ttbox}
swapify   : thm list -> thm list
joinrules : thm list * thm list -> (bool * thm) list
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{swapify} {\it thms}] returns a list consisting of the
  swapped versions of~{\it thms}, regarded as introduction rules.

\item[\ttindexbold{joinrules} ({\it intrs}, {\it elims})]
  joins introduction rules, their swapped versions, and elimination rules for
  use with \ttindex{biresolve_tac}. Each rule is paired with~\texttt{false}
  (indicating ordinary resolution) or~\texttt{true} (indicating
  elim-resolution).
\end{ttdescription}


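A specialized search might therefore biresolve with the joined rules; here
{\it intrs} and {\it elims} stand for whatever rule lists the application
supplies:
\begin{ttbox}
by (biresolve_tac (joinrules (intrs, elims)) 1);
\end{ttbox}
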
\section{Setting up the classical reasoner}\label{sec:classical-setup}
\index{classical reasoner!setting up}
Isabelle's classical object-logics, including \texttt{FOL} and \texttt{HOL}, have
the classical reasoner already set up. When defining a new classical logic,
you should set up the reasoner yourself. It consists of the \ML{} functor
\ttindex{ClassicalFun}, which takes the argument
signature \texttt{CLASSICAL_DATA}:
\begin{ttbox}
signature CLASSICAL_DATA =
  sig
  val mp             : thm
  val not_elim       : thm
  val swap           : thm
  val sizef          : thm -> int
  val hyp_subst_tacs : (int -> tactic) list
  end;
\end{ttbox}
Thus, the functor requires the following items:
\begin{ttdescription}
\item[\tdxbold{mp}] should be the Modus Ponens rule
  $\List{\Var{P}\imp\Var{Q};\; \Var{P}} \Imp \Var{Q}$.

\item[\tdxbold{not_elim}] should be the contradiction rule
  $\List{\neg\Var{P};\; \Var{P}} \Imp \Var{R}$.

\item[\tdxbold{swap}] should be the swap rule
  $\List{\neg \Var{P}; \; \neg \Var{R}\Imp \Var{P}} \Imp \Var{R}$.

\item[\ttindexbold{sizef}] is the heuristic function used for best-first
  search. It should estimate the size of the remaining subgoals. A good
  heuristic function is \ttindex{size_of_thm}, which measures the size of the
  proof state. Another size function might ignore certain subgoals (say,
  those concerned with type checking). A heuristic function might simply
  count the subgoals.

\item[\ttindexbold{hyp_subst_tacs}] is a list of tactics for substitution in
  the hypotheses, typically created by \ttindex{HypsubstFun} (see
  Chapter~\ref{substitution}). This list can, of course, be empty. The
  tactics are assumed to be safe!
\end{ttdescription}
The functor is not at all sensitive to the formalization of the
object-logic. It does not even examine the rules, but merely applies
them according to its fixed strategy. The functor resides in
\texttt{Provers/classical.ML} in the Isabelle sources.

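A sketch of such an instantiation, assuming the object-logic provides theorems
\texttt{mp}, \texttt{notE} and \texttt{swap} of the forms listed above and a
hypothesis-substitution tactic \texttt{hyp_subst_tac} (the names here are
illustrative, not prescribed by the functor):
\begin{ttbox}
structure Classical = ClassicalFun
  (struct
   val mp             = mp
   val not_elim       = notE
   val swap           = swap
   val sizef          = size_of_thm
   val hyp_subst_tacs = [hyp_subst_tac]
   end);
\end{ttbox}
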
\index{classical reasoner|)}