2 \chapter{Simplification}
3 \label{chap:simplification}
4 \index{simplification|(}
6 This chapter describes Isabelle's generic simplification package. It
7 performs conditional and unconditional rewriting and uses contextual
8 information (`local assumptions'). It provides several general hooks,
9 which can provide automatic case splits during rewriting, for example.
10 The simplifier is already set up for many of Isabelle's logics: \FOL,
13 The first section is a quick introduction to the simplifier that
14 should be sufficient to get started. The later sections explain more
18 \section{Simplification for dummies}
19 \label{sec:simp-for-dummies}
21 Basic use of the simplifier is particularly easy because each theory
22 is equipped with sensible default information controlling the rewrite
23 process --- namely the implicit {\em current
24 simpset}\index{simpset!current}. A suite of simple commands is
25 provided that refer to the implicit simpset of the current theory
29 Make sure that you are working within the correct theory context.
30 Executing proofs interactively, or loading them from ML files
31 without associated theories may require setting the current theory
32 manually via the \ttindex{context} command.
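For instance, to make the arithmetic theory used in the examples below the current context, one might write the following (assuming \texttt{Arith.thy} has already been loaded):
\begin{ttbox}
context Arith.thy;
\end{ttbox}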
35 \subsection{Simplification tactics} \label{sec:simp-for-dummies-tacs}
37 Simp_tac : int -> tactic
38 Asm_simp_tac : int -> tactic
39 Full_simp_tac : int -> tactic
40 Asm_full_simp_tac : int -> tactic
41 trace_simp : bool ref \hfill{\bf initially false}
42 debug_simp : bool ref \hfill{\bf initially false}
46 \item[\ttindexbold{Simp_tac} $i$] simplifies subgoal~$i$ using the
47 current simpset. It may solve the subgoal completely if it has
48 become trivial, using the simpset's solver tactic.
50 \item[\ttindexbold{Asm_simp_tac}]\index{assumptions!in simplification}
51 is like \verb$Simp_tac$, but extracts additional rewrite rules from
52 the local assumptions.
54 \item[\ttindexbold{Full_simp_tac}] is like \verb$Simp_tac$, but also
55 simplifies the assumptions (without using the assumptions to
56 simplify each other or the actual goal).
58 \item[\ttindexbold{Asm_full_simp_tac}] is like \verb$Asm_simp_tac$,
59 but also simplifies the assumptions. In particular, assumptions can
61 \footnote{\texttt{Asm_full_simp_tac} used to process the assumptions from
left to right. Purely for reasons of backward compatibility there is now
63 \texttt{Asm_lr_simp_tac} that behaves like the old \texttt{Asm_full_simp_tac}.}
64 \item[set \ttindexbold{trace_simp};] makes the simplifier output internal
65 operations. This includes rewrite steps, but also bookkeeping like
66 modifications of the simpset.
67 \item[set \ttindexbold{debug_simp};] makes the simplifier output some extra
68 information about internal operations. This includes any attempted
69 invocation of simplification procedures.
74 As an example, consider the theory of arithmetic in \HOL. The (rather
75 trivial) goal $0 + (x + 0) = x + 0 + 0$ can be solved by a single call
76 of \texttt{Simp_tac} as follows:
79 Goal "0 + (x + 0) = x + 0 + 0";
80 {\out 1. 0 + (x + 0) = x + 0 + 0}
83 {\out 0 + (x + 0) = x + 0 + 0}
87 The simplifier uses the current simpset of \texttt{Arith.thy}, which
88 contains suitable theorems like $\Var{n}+0 = \Var{n}$ and $0+\Var{n} =
91 \medskip In many cases, assumptions of a subgoal are also needed in
92 the simplification process. For example, \texttt{x = 0 ==> x + x = 0}
93 is solved by \texttt{Asm_simp_tac} as follows:
95 {\out 1. x = 0 ==> x + x = 0}
99 \medskip \texttt{Asm_full_simp_tac} is the most powerful of this quartet
100 of tactics but may also loop where some of the others terminate. For
103 {\out 1. ALL x. f x = g (f (g x)) ==> f 0 = f 0 + 0}
is solved by \texttt{Simp_tac}, but \texttt{Asm_simp_tac} and {\tt
Asm_full_simp_tac} loop because the rewrite rule $f\,\Var{x} =
107 g\,(f\,(g\,\Var{x}))$ extracted from the assumption does not
108 terminate. Isabelle notices certain simple forms of nontermination,
109 but not this one. Because assumptions may simplify each other, there can be
110 very subtle cases of nontermination.
113 \verb$Asm_full_simp_tac$ may miss opportunities to simplify an assumption
114 $A@i$ using an assumption $A@j$ in case $A@j$ is to the right of $A@i$.
115 For example, given the subgoal
117 {\out [| \dots f a \dots; P a; ALL x. P x --> f x = g x |] ==> \dots}
119 \verb$Asm_full_simp_tac$ will not simplify the \texttt{f a} on the left.
120 This problem can be overcome by reordering assumptions (see
121 \S\ref{sec:reordering-asms}). Future versions of \verb$Asm_full_simp_tac$
122 will not suffer from this deficiency.
127 Using the simplifier effectively may take a bit of experimentation.
128 Set the \verb$trace_simp$\index{tracing!of simplification} flag to get
129 a better idea of what is going on. The resulting output can be
130 enormous, especially since invocations of the simplifier are often
131 nested (e.g.\ when solving conditions of rewrite rules).
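A typical tracing session might look like this sketch; the flag is reset afterwards to avoid flooding the output of later proofs:
\begin{ttbox}
set trace_simp;
by (Simp_tac 1);
reset trace_simp;
\end{ttbox}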
134 \subsection{Modifying the current simpset}
136 Addsimps : thm list -> unit
137 Delsimps : thm list -> unit
138 Addsimprocs : simproc list -> unit
139 Delsimprocs : simproc list -> unit
140 Addcongs : thm list -> unit
141 Delcongs : thm list -> unit
142 Addsplits : thm list -> unit
143 Delsplits : thm list -> unit
146 Depending on the theory context, the \texttt{Add} and \texttt{Del}
147 functions manipulate basic components of the associated current
148 simpset. Internally, all rewrite rules have to be expressed as
149 (conditional) meta-equalities. This form is derived automatically
from object-level equations that are supplied by the user. Another
source of rewrite rules is \emph{simplification procedures}, that is,
\ML\ functions that produce suitable theorems on demand, depending on
153 the current redex. Congruences are a more advanced feature; see
154 \S\ref{sec:simp-congs}.
156 \begin{ttdescription}
158 \item[\ttindexbold{Addsimps} $thms$;] adds rewrite rules derived from
159 $thms$ to the current simpset.
161 \item[\ttindexbold{Delsimps} $thms$;] deletes rewrite rules derived
162 from $thms$ from the current simpset.
164 \item[\ttindexbold{Addsimprocs} $procs$;] adds simplification
165 procedures $procs$ to the current simpset.
167 \item[\ttindexbold{Delsimprocs} $procs$;] deletes simplification
168 procedures $procs$ from the current simpset.
170 \item[\ttindexbold{Addcongs} $thms$;] adds congruence rules to the
173 \item[\ttindexbold{Delcongs} $thms$;] deletes congruence rules from the
176 \item[\ttindexbold{Addsplits} $thms$;] adds splitting rules to the
179 \item[\ttindexbold{Delsplits} $thms$;] deletes splitting rules from the
184 When a new theory is built, its implicit simpset is initialized by the
185 union of the respective simpsets of its parent theories. In addition,
186 certain theory definition constructs (e.g.\ \ttindex{datatype} and
187 \ttindex{primrec} in \HOL) implicitly augment the current simpset.
188 Ordinary definitions are not added automatically!
It is up to the user to manipulate the current simpset further by
191 explicitly adding or deleting theorems and simplification procedures.
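For example, a distributive law might be supplied for one proof only and removed again afterwards (here \texttt{add_mult_distrib} is a hypothetical theorem name):
\begin{ttbox}
Addsimps [add_mult_distrib];
by (Simp_tac 1);
Delsimps [add_mult_distrib];
\end{ttbox}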
195 Good simpsets are hard to design. Rules that obviously simplify,
196 like $\Var{n}+0 = \Var{n}$, should be added to the current simpset right after
197 they have been proved. More specific ones (such as distributive laws, which
198 duplicate subterms) should be added only for specific proofs and deleted
199 afterwards. Conversely, sometimes a rule needs
200 to be removed for a certain proof and restored afterwards. The need of
201 frequent additions or deletions may indicate a badly designed
205 The union of the parent simpsets (as described above) is not always
206 a good starting point for the new theory. If some ancestors have
207 deleted simplification rules because they are no longer wanted,
208 while others have left those rules in, then the union will contain
209 the unwanted rules. After this union is formed, changes to
210 a parent simpset have no effect on the child simpset.
214 \section{Simplification sets}\index{simplification sets}
216 The simplifier is controlled by information contained in {\bf
217 simpsets}. These consist of several components, including rewrite
218 rules, simplification procedures, congruence rules, and the subgoaler,
219 solver and looper tactics. The simplifier should be set up with
220 sensible defaults so that most simplifier calls specify only rewrite
221 rules or simplification procedures. Experienced users can exploit the
222 other components to streamline proofs in more sophisticated manners.
224 \subsection{Inspecting simpsets}
226 print_ss : simpset -> unit
227 rep_ss : simpset -> \{mss : meta_simpset,
228 subgoal_tac: simpset -> int -> tactic,
loop_tacs : (string * (int -> tactic)) list,
230 finish_tac : solver list,
231 unsafe_finish_tac : solver list\}
233 \begin{ttdescription}
235 \item[\ttindexbold{print_ss} $ss$;] displays the printable contents of
236 simpset $ss$. This includes the rewrite rules and congruences in
237 their internal form expressed as meta-equalities. The names of the
238 simplification procedures and the patterns they are invoked on are
239 also shown. The other parts, functions and tactics, are
242 \item[\ttindexbold{rep_ss} $ss$;] decomposes $ss$ as a record of its internal
243 components, namely the meta_simpset, the subgoaler, the loop, and the safe
249 \subsection{Building simpsets}
252 merge_ss : simpset * simpset -> simpset
254 \begin{ttdescription}
256 \item[\ttindexbold{empty_ss}] is the empty simpset. This is not very
257 useful under normal circumstances because it doesn't contain
258 suitable tactics (subgoaler etc.). When setting up the simplifier
259 for a particular object-logic, one will typically define a more
260 appropriate ``almost empty'' simpset. For example, in \HOL\ this is
261 called \ttindexbold{HOL_basic_ss}.
263 \item[\ttindexbold{merge_ss} ($ss@1$, $ss@2$)] merges simpsets $ss@1$
264 and $ss@2$ by building the union of their respective rewrite rules,
265 simplification procedures and congruences. The other components
266 (tactics etc.) cannot be merged, though; they are taken from either
simpset\footnote{Actually from $ss@1$, but it would be unwise to count
273 \subsection{Accessing the current simpset}
274 \label{sec:access-current-simpset}
277 simpset : unit -> simpset
278 simpset_ref : unit -> simpset ref
279 simpset_of : theory -> simpset
280 simpset_ref_of : theory -> simpset ref
281 print_simpset : theory -> unit
SIMPSET : (simpset -> tactic) -> tactic
SIMPSET' : (simpset -> 'a -> tactic) -> 'a -> tactic
286 Each theory contains a current simpset\index{simpset!current} stored
287 within a private ML reference variable. This can be retrieved and
290 \begin{ttdescription}
292 \item[\ttindexbold{simpset}();] retrieves the simpset value from the
293 current theory context.
295 \item[\ttindexbold{simpset_ref}();] retrieves the simpset reference
296 variable from the current theory context. This can be assigned to
297 by using \texttt{:=} in ML.
299 \item[\ttindexbold{simpset_of} $thy$;] retrieves the simpset value
302 \item[\ttindexbold{simpset_ref_of} $thy$;] retrieves the simpset
303 reference variable from theory $thy$.
305 \item[\ttindexbold{print_simpset} $thy$;] prints the current simpset
306 of theory $thy$ in the same way as \texttt{print_ss}.
308 \item[\ttindexbold{SIMPSET} $tacf$, \ttindexbold{SIMPSET'} $tacf'$]
309 are tacticals that make a tactic depend on the implicit current
310 simpset of the theory associated with the proof state they are
316 There is a small difference between \texttt{(SIMPSET'~$tacf$)} and
317 \texttt{($tacf\,$(simpset()))}. For example \texttt{(SIMPSET'
318 simp_tac)} would depend on the theory of the proof state it is
319 applied to, while \texttt{(simp_tac (simpset()))} implicitly refers
320 to the current theory context. Both are usually the same in proof
321 scripts, provided that goals are only stated within the current
322 theory. Robust programs would not count on that, of course.
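For example, a variant of \texttt{simp_tac} that always consults the implicit simpset of the proof state it is applied to could be defined as follows:
\begin{ttbox}
val state_simp_tac = SIMPSET' simp_tac;
\end{ttbox}
The resulting \texttt{state_simp_tac} has type \texttt{int -> tactic}.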
326 \subsection{Rewrite rules}
328 addsimps : simpset * thm list -> simpset \hfill{\bf infix 4}
329 delsimps : simpset * thm list -> simpset \hfill{\bf infix 4}
332 \index{rewrite rules|(} Rewrite rules are theorems expressing some
333 form of equality, for example:
335 Suc(\Var{m}) + \Var{n} &=& \Var{m} + Suc(\Var{n}) \\
336 \Var{P}\conj\Var{P} &\bimp& \Var{P} \\
337 \Var{A} \un \Var{B} &\equiv& \{x.x\in \Var{A} \disj x\in \Var{B}\}
339 Conditional rewrites such as $\Var{m}<\Var{n} \Imp \Var{m}/\Var{n} =
340 0$ are also permitted; the conditions can be arbitrary formulas.
342 Internally, all rewrite rules are translated into meta-equalities,
343 theorems with conclusion $lhs \equiv rhs$. Each simpset contains a
344 function for extracting equalities from arbitrary theorems. For
345 example, $\lnot(\Var{x}\in \{\})$ could be turned into $\Var{x}\in \{\}
346 \equiv False$. This function can be installed using
347 \ttindex{setmksimps} but only the definer of a logic should need to do
348 this; see \S\ref{sec:setmksimps}. The function processes theorems
349 added by \texttt{addsimps} as well as local assumptions.
351 \begin{ttdescription}
353 \item[$ss$ \ttindexbold{addsimps} $thms$] adds rewrite rules derived
354 from $thms$ to the simpset $ss$.
356 \item[$ss$ \ttindexbold{delsimps} $thms$] deletes rewrite rules
357 derived from $thms$ from the simpset $ss$.
362 The simplifier will accept all standard rewrite rules: those where
363 all unknowns are of base type. Hence ${\Var{i}+(\Var{j}+\Var{k})} =
364 {(\Var{i}+\Var{j})+\Var{k}}$ is OK.
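Rules of this kind are often supplied for a single simplifier call (here \texttt{add_assoc} is a hypothetical name for the associativity theorem):
\begin{ttbox}
by (simp_tac (simpset() addsimps [add_assoc]) 1);
\end{ttbox}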
366 It will also deal gracefully with all rules whose left-hand sides
367 are so-called {\em higher-order patterns}~\cite{nipkow-patterns}.
368 \indexbold{higher-order pattern}\indexbold{pattern, higher-order}
369 These are terms in $\beta$-normal form (this will always be the case
370 unless you have done something strange) where each occurrence of an
371 unknown is of the form $\Var{F}(x@1,\dots,x@n)$, where the $x@i$ are
372 distinct bound variables. Hence $(\forall x.\Var{P}(x) \land
373 \Var{Q}(x)) \bimp (\forall x.\Var{P}(x)) \land (\forall
374 x.\Var{Q}(x))$ is also OK, in both directions.
376 In some rare cases the rewriter will even deal with quite general
377 rules: for example ${\Var{f}(\Var{x})\in range(\Var{f})} = True$
378 rewrites $g(a) \in range(g)$ to $True$, but will fail to match
379 $g(h(b)) \in range(\lambda x.g(h(x)))$. However, you can replace
380 the offending subterms (in our case $\Var{f}(\Var{x})$, which is not
381 a pattern) by adding new variables and conditions: $\Var{y} =
382 \Var{f}(\Var{x}) \Imp \Var{y}\in range(\Var{f}) = True$ is
383 acceptable as a conditional rewrite rule since conditions can be
386 There is basically no restriction on the form of the right-hand
387 sides. They may not contain extraneous term or type variables,
390 \index{rewrite rules|)}
393 \subsection{*Simplification procedures}
395 addsimprocs : simpset * simproc list -> simpset
396 delsimprocs : simpset * simproc list -> simpset
399 Simplification procedures are {\ML} objects of abstract type
400 \texttt{simproc}. Basically they are just functions that may produce
401 \emph{proven} rewrite rules on demand. They are associated with
402 certain patterns that conceptually represent left-hand sides of
403 equations; these are shown by \texttt{print_ss}. During its
404 operation, the simplifier may offer a simplification procedure the
405 current redex and ask for a suitable rewrite rule. Thus rules may be
406 specifically fashioned for particular situations, resulting in a more
407 powerful mechanism than term rewriting by a fixed set of rules.
410 \begin{ttdescription}
412 \item[$ss$ \ttindexbold{addsimprocs} $procs$] adds the simplification
procedures $procs$ to the simpset $ss$.
415 \item[$ss$ \ttindexbold{delsimprocs} $procs$] deletes the simplification
procedures $procs$ from the simpset $ss$.
420 For example, simplification procedures \ttindexbold{nat_cancel} of
421 \texttt{HOL/Arith} cancel common summands and constant factors out of
422 several relations of sums over natural numbers.
424 Consider the following goal, which after cancelling $a$ on both sides
425 contains a factor of $2$. Simplifying with the simpset of
426 \texttt{Arith.thy} will do the cancellation automatically:
428 {\out 1. x + a + x < y + y + 2 + a + a + a + a + a}
430 {\out 1. x < Suc (a + (a + y))}
434 \subsection{*Congruence rules}\index{congruence rules}\label{sec:simp-congs}
436 addcongs : simpset * thm list -> simpset \hfill{\bf infix 4}
437 delcongs : simpset * thm list -> simpset \hfill{\bf infix 4}
438 addeqcongs : simpset * thm list -> simpset \hfill{\bf infix 4}
439 deleqcongs : simpset * thm list -> simpset \hfill{\bf infix 4}
442 Congruence rules are meta-equalities of the form
444 f(\Var{x@1},\ldots,\Var{x@n}) \equiv f(\Var{y@1},\ldots,\Var{y@n}).
446 This governs the simplification of the arguments of~$f$. For
447 example, some arguments can be simplified under additional assumptions:
448 \[ \List{\Var{P@1} \bimp \Var{Q@1};\; \Var{Q@1} \Imp \Var{P@2} \bimp \Var{Q@2}}
449 \Imp (\Var{P@1} \imp \Var{P@2}) \equiv (\Var{Q@1} \imp \Var{Q@2})
451 Given this rule, the simplifier assumes $Q@1$ and extracts rewrite
452 rules from it when simplifying~$P@2$. Such local assumptions are
453 effective for rewriting formulae such as $x=0\imp y+x=y$. The local
454 assumptions are also provided as theorems to the solver; see
\S\ref{sec:simp-solver} below.
457 \begin{ttdescription}
459 \item[$ss$ \ttindexbold{addcongs} $thms$] adds congruence rules to the
460 simpset $ss$. These are derived from $thms$ in an appropriate way,
461 depending on the underlying object-logic.
463 \item[$ss$ \ttindexbold{delcongs} $thms$] deletes congruence rules
466 \item[$ss$ \ttindexbold{addeqcongs} $thms$] adds congruence rules in
467 their internal form (conclusions using meta-equality) to simpset
468 $ss$. This is the basic mechanism that \texttt{addcongs} is built
469 on. It should be rarely used directly.
471 \item[$ss$ \ttindexbold{deleqcongs} $thms$] deletes congruence rules
472 in internal form from simpset $ss$.
478 Here are some more examples. The congruence rule for bounded
479 quantifiers also supplies contextual information, this time about the
482 &&\List{\Var{A}=\Var{B};\;
483 \Forall x. x\in \Var{B} \Imp \Var{P}(x) = \Var{Q}(x)} \Imp{} \\
485 (\forall x\in \Var{A}.\Var{P}(x)) = (\forall x\in \Var{B}.\Var{Q}(x))
487 The congruence rule for conditional expressions can supply contextual
488 information for simplifying the arms:
489 \[ \List{\Var{p}=\Var{q};~ \Var{q} \Imp \Var{a}=\Var{c};~
490 \lnot\Var{q} \Imp \Var{b}=\Var{d}} \Imp
491 if(\Var{p},\Var{a},\Var{b}) \equiv if(\Var{q},\Var{c},\Var{d})
493 A congruence rule can also {\em prevent\/} simplification of some arguments.
494 Here is an alternative congruence rule for conditional expressions:
495 \[ \Var{p}=\Var{q} \Imp
496 if(\Var{p},\Var{a},\Var{b}) \equiv if(\Var{q},\Var{a},\Var{b})
498 Only the first argument is simplified; the others remain unchanged.
499 This can make simplification much faster, but may require an extra case split
503 \subsection{*The subgoaler}\label{sec:simp-subgoaler}
506 simpset * (simpset -> int -> tactic) -> simpset \hfill{\bf infix 4}
507 prems_of_ss : simpset -> thm list
510 The subgoaler is the tactic used to solve subgoals arising out of
511 conditional rewrite rules or congruence rules. The default should be
512 simplification itself. Occasionally this strategy needs to be
513 changed. For example, if the premise of a conditional rule is an
514 instance of its conclusion, as in $Suc(\Var{m}) < \Var{n} \Imp \Var{m}
515 < \Var{n}$, the default strategy could loop.
517 \begin{ttdescription}
519 \item[$ss$ \ttindexbold{setsubgoaler} $tacf$] sets the subgoaler of
520 $ss$ to $tacf$. The function $tacf$ will be applied to the current
521 simplifier context expressed as a simpset.
523 \item[\ttindexbold{prems_of_ss} $ss$] retrieves the current set of
524 premises from simplifier context $ss$. This may be non-empty only
525 if the simplifier has been told to utilize local assumptions in the
526 first place, e.g.\ if invoked via \texttt{asm_simp_tac}.
530 As an example, consider the following subgoaler:
\begin{ttbox}
fun subgoaler ss =
    assume_tac ORELSE'
    resolve_tac (prems_of_ss ss) ORELSE'
    asm_simp_tac ss;
\end{ttbox}
This tactic first tries to solve the subgoal by assumption or by
resolving with one of the premises, calling simplification only
542 \subsection{*The solver}\label{sec:simp-solver}
544 mk_solver : string -> (thm list -> int -> tactic) -> solver
545 setSolver : simpset * solver -> simpset \hfill{\bf infix 4}
546 addSolver : simpset * solver -> simpset \hfill{\bf infix 4}
547 setSSolver : simpset * solver -> simpset \hfill{\bf infix 4}
548 addSSolver : simpset * solver -> simpset \hfill{\bf infix 4}
551 A solver is a tactic that attempts to solve a subgoal after
552 simplification. Typically it just proves trivial subgoals such as
553 \texttt{True} and $t=t$. It could use sophisticated means such as {\tt
554 blast_tac}, though that could make simplification expensive.
555 To keep things more abstract, solvers are packaged up in type
556 \texttt{solver}. The only way to create a solver is via \texttt{mk_solver}.
558 Rewriting does not instantiate unknowns. For example, rewriting
559 cannot prove $a\in \Var{A}$ since this requires
560 instantiating~$\Var{A}$. The solver, however, is an arbitrary tactic
561 and may instantiate unknowns as it pleases. This is the only way the
562 simplifier can handle a conditional rewrite rule whose condition
563 contains extra variables. When a simplification tactic is to be
564 combined with other provers, especially with the classical reasoner,
565 it is important whether it can be considered safe or not. For this
566 reason a simpset contains two solvers, a safe and an unsafe one.
568 The standard simplification strategy solely uses the unsafe solver,
569 which is appropriate in most cases. For special applications where
570 the simplification process is not allowed to instantiate unknowns
571 within the goal, simplification starts with the safe solver, but may
572 still apply the ordinary unsafe one in nested simplifications for
573 conditional rules or congruences. Note that in this way the overall
574 tactic is not totally safe: it may instantiate unknowns that appear also
577 \begin{ttdescription}
578 \item[\ttindexbold{mk_solver} $s$ $tacf$] converts $tacf$ into a new solver;
579 the string $s$ is only attached as a comment and has no other significance.
581 \item[$ss$ \ttindexbold{setSSolver} $tacf$] installs $tacf$ as the
582 \emph{safe} solver of $ss$.
584 \item[$ss$ \ttindexbold{addSSolver} $tacf$] adds $tacf$ as an
585 additional \emph{safe} solver; it will be tried after the solvers
586 which had already been present in $ss$.
588 \item[$ss$ \ttindexbold{setSolver} $tacf$] installs $tacf$ as the
589 unsafe solver of $ss$.
591 \item[$ss$ \ttindexbold{addSolver} $tacf$] adds $tacf$ as an
592 additional unsafe solver; it will be tried after the solvers which
593 had already been present in $ss$.
599 \index{assumptions!in simplification} The solver tactic is invoked
600 with a list of theorems, namely assumptions that hold in the local
601 context. This may be non-empty only if the simplifier has been told
602 to utilize local assumptions in the first place, e.g.\ if invoked via
603 \texttt{asm_simp_tac}. The solver is also presented the full goal
604 including its assumptions in any case. Thus it can use these (e.g.\
605 by calling \texttt{assume_tac}), even if the list of premises is not
610 As explained in \S\ref{sec:simp-subgoaler}, the subgoaler is also used
611 to solve the premises of congruence rules. These are usually of the
612 form $s = \Var{x}$, where $s$ needs to be simplified and $\Var{x}$
613 needs to be instantiated with the result. Typically, the subgoaler
614 will invoke the simplifier at some point, which will eventually call
615 the solver. For this reason, solver tactics must be prepared to solve
616 goals of the form $t = \Var{x}$, usually by reflexivity. In
617 particular, reflexivity should be tried before any of the fancy
618 tactics like \texttt{blast_tac}.
620 It may even happen that due to simplification the subgoal is no longer
621 an equality. For example $False \bimp \Var{Q}$ could be rewritten to
622 $\lnot\Var{Q}$. To cover this case, the solver could try resolving
623 with the theorem $\lnot False$.
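A suitable solver could thus be sketched as follows, where \texttt{refl} and \texttt{not_False} stand for appropriate object-logic theorems:
\begin{ttbox}
val my_solver = mk_solver "refl/assume"
  (fn prems => resolve_tac (refl :: not_False :: prems)
               ORELSE' assume_tac);
val my_ss = simpset() setSolver my_solver;
\end{ttbox}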
628 If the simplifier aborts with the message \texttt{Failed congruence
629 proof!}, then the subgoaler or solver has failed to prove a
630 premise of a congruence rule. This should never occur under normal
631 circumstances; it indicates that some congruence rule, or possibly
632 the subgoaler or solver, is faulty.
636 \subsection{*The looper}\label{sec:simp-looper}
638 setloop : simpset * (int -> tactic) -> simpset \hfill{\bf infix 4}
639 addloop : simpset * (string * (int -> tactic)) -> simpset \hfill{\bf infix 4}
640 delloop : simpset * string -> simpset \hfill{\bf infix 4}
641 addsplits : simpset * thm list -> simpset \hfill{\bf infix 4}
642 delsplits : simpset * thm list -> simpset \hfill{\bf infix 4}
645 The looper is a list of tactics that are applied after simplification, in case
646 the solver failed to solve the simplified goal. If the looper
647 succeeds, the simplification process is started all over again. Each
648 of the subgoals generated by the looper is attacked in turn, in
A typical looper is case splitting\index{case splitting}: the expansion of a conditional.
652 Another possibility is to apply an elimination rule on the
653 assumptions. More adventurous loopers could start an induction.
655 \begin{ttdescription}
657 \item[$ss$ \ttindexbold{setloop} $tacf$] installs $tacf$ as the only looper
660 \item[$ss$ \ttindexbold{addloop} $(name,tacf)$] adds $tacf$ as an additional
661 looper tactic with name $name$; it will be tried after the looper tactics
662 that had already been present in $ss$.
664 \item[$ss$ \ttindexbold{delloop} $name$] deletes the looper tactic $name$
667 \item[$ss$ \ttindexbold{addsplits} $thms$] adds
668 split tactics for $thms$ as additional looper tactics of $ss$.
\item[$ss$ \ttindexbold{delsplits} $thms$] deletes the
split tactics for $thms$ from the looper tactics of $ss$.
675 The splitter replaces applications of a given function; the right-hand side
676 of the replacement can be anything. For example, here is a splitting rule
677 for conditional expressions:
678 \[ \Var{P}(if(\Var{Q},\Var{x},\Var{y})) \bimp (\Var{Q} \imp \Var{P}(\Var{x}))
679 \conj (\lnot\Var{Q} \imp \Var{P}(\Var{y}))
681 Another example is the elimination operator for Cartesian products (which
682 happens to be called~$split$):
683 \[ \Var{P}(split(\Var{f},\Var{p})) \bimp (\forall a~b. \Var{p} =
684 \langle a,b\rangle \imp \Var{P}(\Var{f}(a,b)))
687 For technical reasons, there is a distinction between case splitting in the
688 conclusion and in the premises of a subgoal. The former is done by
689 \texttt{split_tac} with rules like \texttt{split_if} or \texttt{option.split},
690 which do not split the subgoal, while the latter is done by
691 \texttt{split_asm_tac} with rules like \texttt{split_if_asm} or
692 \texttt{option.split_asm}, which split the subgoal.
693 The operator \texttt{addsplits} automatically takes care of which tactic to
694 call, analyzing the form of the rules given as argument.
696 Due to \texttt{split_asm_tac}, the simplifier may split subgoals!
699 Case splits should be allowed only when necessary; they are expensive
700 and hard to control. Here is an example of use, where \texttt{split_if}
701 is the first rule above:
703 by (simp_tac (simpset()
704 addloop ("split if", split_tac [split_if])) 1);
706 Users would usually prefer the following shortcut using \texttt{addsplits}:
708 by (simp_tac (simpset() addsplits [split_if]) 1);
710 Case-splitting on conditional expressions is usually beneficial, so it is
711 enabled by default in the object-logics \texttt{HOL} and \texttt{FOL}.
714 \section{The simplification tactics}\label{simp-tactics}
715 \index{simplification!tactics}\index{tactics!simplification}
717 generic_simp_tac : bool -> bool * bool * bool ->
718 simpset -> int -> tactic
719 simp_tac : simpset -> int -> tactic
720 asm_simp_tac : simpset -> int -> tactic
721 full_simp_tac : simpset -> int -> tactic
722 asm_full_simp_tac : simpset -> int -> tactic
723 safe_asm_full_simp_tac : simpset -> int -> tactic
726 \texttt{generic_simp_tac} is the basic tactic that is underlying any actual
727 simplification work. The others are just instantiations of it. The rewriting
728 strategy is always strictly bottom up, except for congruence rules,
729 which are applied while descending into a term. Conditions in conditional
730 rewrite rules are solved recursively before the rewrite rule is applied.
732 \begin{ttdescription}
734 \item[\ttindexbold{generic_simp_tac} $safe$ ($simp\_asm$, $use\_asm$, $mutual$)]
735 gives direct access to the various simplification modes:
\begin{itemize}
\item if $safe$ is {\tt true}, the safe solver is used as explained in
  \S\ref{sec:simp-solver},
\item $simp\_asm$ determines whether the local assumptions are simplified,
\item $use\_asm$ determines whether the assumptions are used as local rewrite
  rules,
\item $mutual$ determines whether assumptions can simplify each other rather
  than being processed from left to right.
\end{itemize}
745 This generic interface is intended
746 for building special tools, e.g.\ for combining the simplifier with the
747 classical reasoner. It is rarely used directly.
749 \item[\ttindexbold{simp_tac}, \ttindexbold{asm_simp_tac},
750 \ttindexbold{full_simp_tac}, \ttindexbold{asm_full_simp_tac}] are
751 the basic simplification tactics that work exactly like their
752 namesakes in \S\ref{sec:simp-for-dummies}, except that they are
753 explicitly supplied with a simpset.
759 Local modifications of simpsets within a proof are often much cleaner
by using the above tactics in conjunction with explicit simpsets, rather
761 than their capitalized counterparts. For example
767 can be expressed more appropriately as
769 by (simp_tac (simpset() addsimps \(thms\)) \(i\));
774 Also note that functions depending implicitly on the current theory
775 context (like capital \texttt{Simp_tac} and the other commands of
776 \S\ref{sec:simp-for-dummies}) should be considered harmful outside of
777 actual proof scripts. In particular, ML programs like theory
778 definition packages or special tactics should refer to simpsets only
779 explicitly, via the above tactics used in conjunction with
780 \texttt{simpset_of} or the \texttt{SIMPSET} tacticals.
783 \section{Forward rules and conversions}
784 \index{simplification!forward rules}\index{simplification!conversions}
785 \begin{ttbox}\index{*simplify}\index{*asm_simplify}\index{*full_simplify}\index{*asm_full_simplify}\index{*Simplifier.rewrite}\index{*Simplifier.asm_rewrite}\index{*Simplifier.full_rewrite}\index{*Simplifier.asm_full_rewrite}
786 simplify : simpset -> thm -> thm
787 asm_simplify : simpset -> thm -> thm
788 full_simplify : simpset -> thm -> thm
789 asm_full_simplify : simpset -> thm -> thm\medskip
790 Simplifier.rewrite : simpset -> cterm -> thm
791 Simplifier.asm_rewrite : simpset -> cterm -> thm
792 Simplifier.full_rewrite : simpset -> cterm -> thm
793 Simplifier.asm_full_rewrite : simpset -> cterm -> thm
796 The first four of these functions provide \emph{forward} rules for
797 simplification. Their effect is analogous to the corresponding
798 tactics described in \S\ref{simp-tactics}, but affect the whole
799 theorem instead of just a certain subgoal. Also note that the
looper/solver process as described in \S\ref{sec:simp-looper} and
801 \S\ref{sec:simp-solver} is omitted in forward simplification.
803 The latter four are \emph{conversions}, establishing proven equations
804 of the form $t \equiv u$ where the l.h.s.\ $t$ has been given as
808 Forward simplification rules and conversions should be used rarely
809 in ordinary proof scripts. The main intention is to provide an
810 internal interface to the simplifier for special utilities.
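For instance, a theorem might be brought into normal form before being
stored, roughly as follows (a sketch; \texttt{my_thm} is a
hypothetical theorem):
\begin{ttbox}
val my_thm' = simplify (simpset()) my_thm;
\end{ttbox}
Analogously, \texttt{Simplifier.rewrite (simpset())} maps a
\texttt{cterm}~$t$ to a theorem $t \equiv u$, where $u$ is the normal
form of~$t$ with respect to the given simpset.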
\section{Examples of using the Simplifier}
\index{examples!of simplification} Assume we are working within {\tt
  FOL} (see the file \texttt{FOL/ex/Nat}) and that
\begin{ttdescription}
\item[Nat.thy]
  is a theory including the constants $0$, $Suc$ and $+$,
\item[add_0]
  is the rewrite rule $0+\Var{n} = \Var{n}$,
\item[add_Suc]
  is the rewrite rule $Suc(\Var{m})+\Var{n} = Suc(\Var{m}+\Var{n})$,
\item[induct]
  is the induction rule $\List{\Var{P}(0);\; \Forall x. \Var{P}(x)\Imp
    \Var{P}(Suc(x))} \Imp \Var{P}(\Var{n})$.
\end{ttdescription}
We augment the implicit simpset inherited from \texttt{Nat} with the
basic rewrite rules for addition of natural numbers:
\begin{ttbox}
Addsimps [add_0, add_Suc];
\end{ttbox}
\subsection{A trivial example}
Proofs by induction typically involve simplification.  Here is a proof
that~0 is a right identity:
\begin{ttbox}
Goal "m + 0 = m";
{\out Level 0}
{\out m + 0 = m}
{\out  1. m + 0 = m}
\end{ttbox}
The first step is to perform induction on the variable~$m$.  This returns a
base case and inductive step as two subgoals:
\begin{ttbox}
by (res_inst_tac [("n","m")] induct 1);
{\out Level 1}
{\out m + 0 = m}
{\out  1. 0 + 0 = 0}
{\out  2. !!x. x + 0 = x ==> Suc(x) + 0 = Suc(x)}
\end{ttbox}
Simplification solves the first subgoal trivially:
\begin{ttbox}
by (Simp_tac 1);
{\out Level 2}
{\out m + 0 = m}
{\out  1. !!x. x + 0 = x ==> Suc(x) + 0 = Suc(x)}
\end{ttbox}
The remaining subgoal requires \ttindex{Asm_simp_tac} in order to use the
induction hypothesis as a rewrite rule:
\begin{ttbox}
by (Asm_simp_tac 1);
{\out Level 3}
{\out m + 0 = m}
{\out No subgoals!}
\end{ttbox}
\subsection{An example of tracing}
\index{tracing!of simplification|(}\index{*trace_simp}

Let us prove a similar result involving more complex terms.  We prove
that addition is commutative.
\begin{ttbox}
Goal "m+Suc(n) = Suc(m+n)";
{\out Level 0}
{\out m + Suc(n) = Suc(m + n)}
{\out  1. m + Suc(n) = Suc(m + n)}
\end{ttbox}
Performing induction on~$m$ yields two subgoals:
\begin{ttbox}
by (res_inst_tac [("n","m")] induct 1);
{\out Level 1}
{\out m + Suc(n) = Suc(m + n)}
{\out  1. 0 + Suc(n) = Suc(0 + n)}
{\out  2. !!x. x + Suc(n) = Suc(x + n) ==>}
{\out          Suc(x) + Suc(n) = Suc(Suc(x) + n)}
\end{ttbox}
Simplification solves the first subgoal, this time rewriting two
occurrences of~0:
\begin{ttbox}
by (Simp_tac 1);
{\out Level 2}
{\out m + Suc(n) = Suc(m + n)}
{\out  1. !!x. x + Suc(n) = Suc(x + n) ==>}
{\out          Suc(x) + Suc(n) = Suc(Suc(x) + n)}
\end{ttbox}
Switching tracing on illustrates how the simplifier solves the remaining
subgoal:
\begin{ttbox}
set trace_simp;
by (Asm_simp_tac 1);
{\out Adding rewrite rule:}
{\out .x + Suc n == Suc (.x + n)}
{\out Applying instance of rewrite rule:}
{\out ?m + Suc ?n == Suc (?m + ?n)}
{\out Rewriting:}
{\out Suc .x + Suc n == Suc (Suc .x + n)}
{\out Applying instance of rewrite rule:}
{\out Suc ?m + ?n == Suc (?m + ?n)}
{\out Rewriting:}
{\out Suc .x + n == Suc (.x + n)}
{\out Applying instance of rewrite rule:}
{\out Suc ?m + ?n == Suc (?m + ?n)}
{\out Rewriting:}
{\out Suc .x + n == Suc (.x + n)}
{\out Applying instance of rewrite rule:}
{\out ?x = ?x == True}
{\out Rewriting:}
{\out Suc (Suc (.x + n)) = Suc (Suc (.x + n)) == True}
{\out Level 3}
{\out m + Suc(n) = Suc(m + n)}
{\out No subgoals!}
\end{ttbox}
Many variations are possible.  At Level~1 (in either example) we could have
solved both subgoals at once using the tactical \ttindex{ALLGOALS}:
\begin{ttbox}
by (ALLGOALS Asm_simp_tac);
{\out Level 2}
{\out m + Suc(n) = Suc(m + n)}
{\out No subgoals!}
\end{ttbox}
\index{tracing!of simplification|)}
\subsection{Free variables and simplification}

Here is a conjecture to be proved for an arbitrary function~$f$
satisfying the law $f(Suc(\Var{n})) = Suc(f(\Var{n}))$:
\begin{ttbox}
val [prem] = Goal
    "(!!n. f(Suc(n)) = Suc(f(n))) ==> f(i+j) = i+f(j)";
{\out Level 0}
{\out f(i + j) = i + f(j)}
{\out  1. f(i + j) = i + f(j)}
{\out val prem = "f(Suc(?n)) = Suc(f(?n))}
{\out             [!!n. f(Suc(n)) = Suc(f(n))]" : thm}
\end{ttbox}
In the theorem~\texttt{prem}, note that $f$ is a free variable while
$\Var{n}$ is a schematic variable.
\begin{ttbox}
by (res_inst_tac [("n","i")] induct 1);
{\out Level 1}
{\out f(i + j) = i + f(j)}
{\out  1. f(0 + j) = 0 + f(j)}
{\out  2. !!x. f(x + j) = x + f(j) ==> f(Suc(x) + j) = Suc(x) + f(j)}
\end{ttbox}
We simplify each subgoal in turn.  The first one is trivial:
\begin{ttbox}
by (Simp_tac 1);
{\out Level 2}
{\out f(i + j) = i + f(j)}
{\out  1. !!x. f(x + j) = x + f(j) ==> f(Suc(x) + j) = Suc(x) + f(j)}
\end{ttbox}
The remaining subgoal requires rewriting by the premise, so we add it
to the current simpset:
\begin{ttbox}
by (asm_simp_tac (simpset() addsimps [prem]) 1);
{\out Level 3}
{\out f(i + j) = i + f(j)}
{\out No subgoals!}
\end{ttbox}
\subsection{Reordering assumptions}
\label{sec:reordering-asms}
\index{assumptions!reordering}

As mentioned in \S\ref{sec:simp-for-dummies-tacs},
\ttindex{asm_full_simp_tac} may require the assumptions to be permuted
to be more effective.  Given the subgoal
\begin{ttbox}
{\out 1. [| ALL x. P x --> f x = g x; Q(f a); P a; R |] ==> S}
\end{ttbox}
we can rotate the assumptions two positions to the right
\begin{ttbox}
by (rotate_tac ~2 1);
\end{ttbox}
which yields
\begin{ttbox}
{\out 1. [| P a; R; ALL x. P x --> f x = g x; Q(f a) |] ==> S}
\end{ttbox}
which enables \verb$asm_full_simp_tac$ to simplify \verb$Q(f a)$ to
\verb$Q(g a)$ because now all required assumptions are to the left of
\verb$Q(f a)$.

Since rotation alone cannot produce arbitrary permutations, you can also pick
out a particular assumption which needs to be rewritten and move it to the
right end of the assumptions.  In the above case rotation can be replaced by
\begin{ttbox}
by (dres_inst_tac [("psi","Q(f a)")] asm_rl 1);
\end{ttbox}
which is more directed and leads to
\begin{ttbox}
{\out 1. [| ALL x. P x --> f x = g x; P a; R; Q(f a) |] ==> S}
\end{ttbox}

Reordering assumptions usually leads to brittle proofs and should be
avoided.  Future versions of \verb$asm_full_simp_tac$ will completely
remove the need for such manipulations.
\section{Permutative rewrite rules}
\index{rewrite rules!permutative|(}

A rewrite rule is {\bf permutative} if the left-hand side and right-hand
side are the same up to renaming of variables.  The most common permutative
rule is commutativity: $x+y = y+x$.  Other examples include $(x-y)-z =
(x-z)-y$ in arithmetic and $insert(x,insert(y,A)) = insert(y,insert(x,A))$
for sets.  Such rules are common enough to merit special attention.

Because ordinary rewriting loops given such rules, the simplifier
employs a special strategy, called {\bf ordered
  rewriting}\index{rewriting!ordered}.  There is a standard
lexicographic ordering on terms.  This should be perfectly OK in most
cases, but can be changed for special applications.
\begin{ttbox}\index{*settermless}
settermless : simpset * (term * term -> bool) -> simpset \hfill{\bf infix 4}
\end{ttbox}
\begin{ttdescription}
\item[$ss$ \ttindexbold{settermless} $rel$] installs relation $rel$ as
  term order in simpset $ss$.
\end{ttdescription}
A permutative rewrite rule is applied only if it decreases the given
term with respect to this ordering.  For example, commutativity
rewrites~$b+a$ to $a+b$, but then stops because $a+b$ is strictly less
than $b+a$.  The Boyer-Moore theorem prover~\cite{bm88book} also
employs ordered rewriting.

Permutative rewrite rules are added to simpsets just like other
rewrite rules; the simplifier recognizes their special status
automatically.  They are most effective in the case of
associative-commutative operators.  (Associativity by itself is not
permutative.)  When dealing with an AC-operator~$f$, keep the
following points in mind:
\begin{itemize}\index{associative-commutative operators}

\item The associative law must always be oriented from left to right,
  namely $f(f(x,y),z) = f(x,f(y,z))$.  The opposite orientation, if
  used with commutativity, leads to looping in conjunction with the
  standard term order.

\item To complete your set of rewrite rules, you must add not just
  associativity~(A) and commutativity~(C) but also a derived rule, {\bf
    left-com\-mut\-ativ\-ity} (LC): $f(x,f(y,z)) = f(y,f(x,z))$.
\end{itemize}
Ordered rewriting with the combination of A, C, and~LC sorts a term
lexicographically:
\[\def\maps#1{\stackrel{#1}{\longmapsto}}
 (b+c)+a \maps{A} b+(c+a) \maps{C} b+(a+c) \maps{LC} a+(b+c) \]
Martin and Nipkow~\cite{martin-nipkow} discuss the theory and give many
examples; other algebraic structures are amenable to ordered rewriting,
such as Boolean rings.
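For the operator~$+$ on type \texttt{nat}, say, the three laws would be
supplied to the simplifier together, for instance like this (a sketch,
assuming the rules are called \texttt{add_assoc}, \texttt{add_commute}
and \texttt{add_left_commute}, as in \HOL):
\begin{ttbox}
Addsimps [add_assoc, add_commute, add_left_commute];
\end{ttbox}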
\subsection{Example: sums of natural numbers}

This example is again set in \HOL\ (see \texttt{HOL/ex/NatSum}).
Theory \thydx{Arith} contains natural number arithmetic.  Its
associated simpset contains many arithmetic laws including
distributivity of~$\times$ over~$+$, while \texttt{add_ac} is a list
consisting of the A, C and LC laws for~$+$ on type \texttt{nat}.  Let
us prove the theorem
\[ \sum@{i=1}^n i = n\times(n+1)/2. \]
%
A functional~\texttt{sum} represents the summation operator under the
interpretation $\texttt{sum} \, f \, (n + 1) = \sum@{i=0}^n f\,i$.  We
extend \texttt{Arith} as follows:
\begin{ttbox}
consts sum     :: [nat=>nat, nat] => nat
primrec
  "sum f 0 = 0"
  "sum f (Suc n) = f(n) + sum f n"
\end{ttbox}
The \texttt{primrec} declaration automatically adds rewrite rules for
\texttt{sum} to the default simpset.  We now remove the
\texttt{nat_cancel} simplification procedures (in order not to spoil
the example) and insert the AC-rules for~$+$:
\begin{ttbox}
Delsimprocs nat_cancel;
Addsimps add_ac;
\end{ttbox}
Our desired theorem now reads $\texttt{sum} \, (\lambda i.i) \, (n+1) =
n\times(n+1)/2$.  The Isabelle goal has both sides multiplied by~$2$:
\begin{ttbox}
Goal "2 * sum (\%i.i) (Suc n) = n * Suc n";
{\out Level 0}
{\out 2 * sum (\%i. i) (Suc n) = n * Suc n}
{\out  1. 2 * sum (\%i. i) (Suc n) = n * Suc n}
\end{ttbox}
Induction should not be applied until the goal is in the simplest
form:
\begin{ttbox}
by (Simp_tac 1);
{\out Level 1}
{\out 2 * sum (\%i. i) (Suc n) = n * Suc n}
{\out  1. n + (sum (\%i. i) n + sum (\%i. i) n) = n * n}
\end{ttbox}
Ordered rewriting has sorted the terms in the left-hand side.  The
subgoal is now ready for induction:
\begin{ttbox}
by (induct_tac "n" 1);
{\out Level 2}
{\out 2 * sum (\%i. i) (Suc n) = n * Suc n}
{\out  1. 0 + (sum (\%i. i) 0 + sum (\%i. i) 0) = 0 * 0}
{\out  2. !!n. n + (sum (\%i. i) n + sum (\%i. i) n) = n * n}
{\out          ==> Suc n + (sum (\%i. i) (Suc n) + sum (\%i. i) (Suc n)) =}
{\out              Suc n * Suc n}
\end{ttbox}
Simplification proves both subgoals immediately:\index{*ALLGOALS}
\begin{ttbox}
by (ALLGOALS Asm_simp_tac);
{\out Level 3}
{\out 2 * sum (\%i. i) (Suc n) = n * Suc n}
{\out No subgoals!}
\end{ttbox}
Simplification cannot prove the induction step if we omit \texttt{add_ac} from
the simpset.  Observe that like terms have not been collected:
\begin{ttbox}
{\out 2 * sum (\%i. i) (Suc n) = n * Suc n}
{\out  1. !!n. n + sum (\%i. i) n + (n + sum (\%i. i) n) = n + n * n}
{\out          ==> n + (n + sum (\%i. i) n) + (n + (n + sum (\%i. i) n)) =}
{\out              n + (n + (n + n * n))}
\end{ttbox}
Ordered rewriting proves this by sorting the left-hand side.  Proving
arithmetic theorems without ordered rewriting requires explicit use of
commutativity.  This is tedious; try it and see!

Ordered rewriting is equally successful in proving
$\sum@{i=1}^n i^3 = n^2\times(n+1)^2/4$.
\subsection{Re-orienting equalities}
Ordered rewriting with the derived rule \texttt{symmetry} can reverse
equations:
\begin{ttbox}
val symmetry = prove_goal HOL.thy "(x=y) = (y=x)"
                 (fn _ => [Blast_tac 1]);
\end{ttbox}
This is frequently useful.  Assumptions of the form $s=t$, where $t$ occurs
in the conclusion but not~$s$, can often be brought into the right form.
For example, ordered rewriting with \texttt{symmetry} can prove the goal
\[ f(a)=b \conj f(a)=c \imp b=c. \]
Here \texttt{symmetry} reverses both $f(a)=b$ and $f(a)=c$
because $f(a)$ is lexicographically greater than $b$ and~$c$.  These
re-oriented equations, as rewrite rules, replace $b$ and~$c$ in the
conclusion by~$f(a)$.

Another example is the goal $\lnot(t=u) \imp \lnot(u=t)$.
The differing orientations make this appear difficult to prove.  Ordered
rewriting with \texttt{symmetry} makes the equalities agree.  (Without
knowing more about~$t$ and~$u$ we cannot say whether they both go to $t=u$
or~$u=t$.)  Then the simplifier can prove the goal outright.
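Concretely, the first goal above could be attacked by supplying
\texttt{symmetry} as an additional rewrite rule (a sketch, assuming a
\HOL\ context):
\begin{ttbox}
Goal "f(a)=b & f(a)=c --> b=c";
by (simp_tac (simpset() addsimps [symmetry]) 1);
\end{ttbox}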
\index{rewrite rules!permutative|)}
\section{*Coding simplification procedures}
\begin{ttbox}\index{*mk_simproc}
mk_simproc: string -> cterm list ->
              (Sign.sg -> thm list -> term -> thm option) -> simproc
\end{ttbox}

\begin{ttdescription}
\item[\ttindexbold{mk_simproc}~$name$~$lhss$~$proc$] makes $proc$ a
  simplification procedure for left-hand side patterns $lhss$.  The
  name just serves as a comment.  The function $proc$ may be invoked
  by the simplifier for redex positions matched by one of $lhss$ as
  described below.
\end{ttdescription}
Simplification procedures are applied in a two-stage process as
follows: The simplifier tries to match the current redex position
against any one of the $lhs$ patterns of any simplification procedure.
If this succeeds, it invokes the corresponding {\ML} function, passing
it the current signature, local assumptions and the (potential)
redex.  The result may be either \texttt{None} (indicating failure) or
\texttt{Some~$thm$}.

Any successful result is supposed to be a (possibly conditional)
rewrite rule $t \equiv u$ that is applicable to the current redex.
The rule will be applied just as any ordinary rewrite rule.  It is
expected to be already in \emph{internal form}, though, bypassing the
automatic preprocessing of object-level equivalences.
As an example of how to write your own simplification procedures,
consider eta-expansion of pair abstraction (see also
\texttt{HOL/Modelcheck/MCSyn} where this is used to provide external
model checker syntax).

The {\HOL} theory of tuples (see \texttt{HOL/Prod}) provides an
operator \texttt{split} together with some concrete syntax supporting
$\lambda\,(x,y).b$ abstractions.  Assume that we would like to offer a
tactic that rewrites any function $\lambda\,p.f\,p$ (where $p$ is of
some pair type) to $\lambda\,(x,y).f\,(x,y)$.  The corresponding rule
is:
\begin{ttbox}
pair_eta_expand:  (f::'a*'b=>'c) = (\%(x, y). f (x, y))
\end{ttbox}
Unfortunately, term rewriting using this rule directly would not
terminate!  We now use the simplification procedure mechanism in order
to stop the simplifier from applying this rule over and over again,
making it rewrite only actual abstractions.  The simplification
procedure \texttt{pair_eta_expand_proc} is defined as follows:
\begin{ttbox}
val lhss =
  [read_cterm (sign_of Prod.thy)
              ("f::'a*'b=>'c", TVar (("'z", 0), []))];\medskip
val rew = mk_meta_eq pair_eta_expand; \medskip
fun proc _ _ (Abs _) = Some rew
  | proc _ _ _ = None;\medskip
val pair_eta_expand_proc = mk_simproc "pair_eta_expand" lhss proc;
\end{ttbox}
This is an example of using \texttt{pair_eta_expand_proc}:
\begin{ttbox}
{\out 1. P (\%p::'a * 'a. fst p + snd p + z)}
by (simp_tac (simpset() addsimprocs [pair_eta_expand_proc]) 1);
{\out 1. P (\%(x::'a,y::'a). x + y + z)}
\end{ttbox}

\medskip
In the above example the simplification procedure merely exercised
fine-grained control over rule application, beyond higher-order
pattern matching.  Usually, procedures would do more work, in
particular proving theorems that depend on the current redex.
\section{*Setting up the Simplifier}\label{sec:setting-up-simp}
\index{simplification!setting up}

Setting up the simplifier for new logics is complicated.  This section
describes how the simplifier is installed for intuitionistic
first-order logic; the code is largely taken from {\tt
  FOL/simpdata.ML} of the Isabelle sources.

The simplifier and the case splitting tactic, which reside on separate
files, are not part of Pure Isabelle.  They must be loaded explicitly
by the object-logic as follows (below \texttt{\~\relax\~\relax} refers
to \texttt{\$ISABELLE_HOME}):
\begin{ttbox}
use "\~\relax\~\relax/src/Provers/simplifier.ML";
use "\~\relax\~\relax/src/Provers/splitter.ML";
\end{ttbox}
Simplification requires converting object-equalities to meta-level rewrite
rules.  This demands rules stating that equal terms and equivalent formulae
are also equal at the meta-level.  The rule declaration part of the file
\texttt{FOL/IFOL.thy} contains the two lines
\begin{ttbox}\index{*eq_reflection theorem}\index{*iff_reflection theorem}
eq_reflection   "(x=y)   ==> (x==y)"
iff_reflection  "(P<->Q) ==> (P==Q)"
\end{ttbox}
Of course, you should only assert such rules if they are true for your
particular logic.  In Constructive Type Theory, equality is a ternary
relation of the form $a=b\in A$; the type~$A$ determines the meaning
of the equality essentially as a partial equivalence relation.  The
present simplifier cannot be used.  Rewriting in \texttt{CTT} uses
another simplifier, which resides in the file {\tt
  Provers/typedsimp.ML} and is not documented.  Even this does not
work for later variants of Constructive Type Theory that use
intensional equality~\cite{nordstrom90}.
\subsection{A collection of standard rewrite rules}

We first prove lots of standard rewrite rules about the logical
connectives.  These include cancellation and associative laws.  We
define a function that echoes the desired law and then supplies it to the
prover for intuitionistic \FOL:
\begin{ttbox}
fun int_prove_fun s =
 (writeln s;
  prove_goal IFOL.thy s
   (fn prems => [ (cut_facts_tac prems 1),
                  (IntPr.fast_tac 1) ]));
\end{ttbox}
The following rewrite rules about conjunction are a selection of those
proved in \texttt{FOL/simpdata.ML}.  Later, these will be supplied to the
standard simpset.
\begin{ttbox}
val conj_simps = map int_prove_fun
 ["P & True <-> P",      "True & P <-> P",
  "P & False <-> False", "False & P <-> False",
  "P & P <-> P",
  "P & ~P <-> False",    "~P & P <-> False",
  "(P & Q) & R <-> P & (Q & R)"];
\end{ttbox}
The file also proves some distributive laws.  As they can cause exponential
blowup, they will not be included in the standard simpset.  Instead they
are merely bound to an \ML{} identifier, for user reference.
\begin{ttbox}
val distrib_simps = map int_prove_fun
 ["P & (Q | R) <-> P&Q | P&R",
  "(Q | R) & P <-> Q&P | R&P",
  "(P | Q --> R) <-> (P --> R) & (Q --> R)"];
\end{ttbox}
\subsection{Functions for preprocessing the rewrite rules}
\label{sec:setmksimps}
\begin{ttbox}\indexbold{*setmksimps}
setmksimps : simpset * (thm -> thm list) -> simpset \hfill{\bf infix 4}
\end{ttbox}
The next step is to define the function for preprocessing rewrite rules.
This will be installed by calling \texttt{setmksimps} below.  Preprocessing
occurs whenever rewrite rules are added, whether by user command or
automatically.  Preprocessing involves extracting atomic rewrites at the
object-level, then reflecting them to the meta-level.

To start, the function \texttt{gen_all} strips any meta-level
quantifiers from the front of the given theorem.  Usually there are none
anyway.
\begin{ttbox}
fun gen_all th = forall_elim_vars (#maxidx(rep_thm th)+1) th;
\end{ttbox}
The function \texttt{atomize} analyses a theorem in order to extract
atomic rewrite rules.  The head of all the patterns, matched by the
wildcard~\texttt{_}, is the coercion function \texttt{Trueprop}.
\begin{ttbox}
fun atomize th = case concl_of th of
    _ $ (Const("op &",_) $ _ $ _)   => atomize(th RS conjunct1) \at
                                       atomize(th RS conjunct2)
  | _ $ (Const("op -->",_) $ _ $ _) => atomize(th RS mp)
  | _ $ (Const("All",_) $ _)        => atomize(th RS spec)
  | _ $ (Const("True",_))           => []
  | _ $ (Const("False",_))          => []
  | _                               => [th];
\end{ttbox}
There are several cases, depending upon the form of the conclusion:
\begin{itemize}
\item Conjunction: extract rewrites from both conjuncts.
\item Implication: convert $P\imp Q$ to the meta-implication $P\Imp Q$ and
  extract rewrites from~$Q$; these will be conditional rewrites with the
  condition~$P$.
\item Universal quantification: remove the quantifier, replacing the bound
  variable by a schematic variable, and extract rewrites from the body.
\item \texttt{True} and \texttt{False} contain no useful rewrites.
\item Anything else: return the theorem in a singleton list.
\end{itemize}
The resulting theorems are not literally atomic --- they could be
disjunctive, for example --- but are broken down as much as possible.
See the file \texttt{ZF/simpdata.ML} for a sophisticated translation of
set-theoretic formulae into rewrite rules.
For standard situations like the above,
there is a generic auxiliary function \ttindexbold{mk_atomize} that takes a
list of pairs $(name, thms)$, where $name$ is an operator name and
$thms$ is a list of theorems to resolve with in case the pattern matches,
and returns a suitable \texttt{atomize} function.
The simplified rewrites must now be converted into meta-equalities.  The
rule \texttt{eq_reflection} converts equality rewrites, while {\tt
  iff_reflection} converts if-and-only-if rewrites.  The latter possibility
can arise in two other ways: the negative theorem~$\lnot P$ is converted to
$P\equiv\texttt{False}$, and any other theorem~$P$ is converted to
$P\equiv\texttt{True}$.  The rules \texttt{iff_reflection_F} and {\tt
  iff_reflection_T} accomplish this conversion.
\begin{ttbox}
val P_iff_F = int_prove_fun "~P ==> (P <-> False)";
val iff_reflection_F = P_iff_F RS iff_reflection;

val P_iff_T = int_prove_fun "P ==> (P <-> True)";
val iff_reflection_T = P_iff_T RS iff_reflection;
\end{ttbox}
The function \texttt{mk_eq} converts a theorem to a meta-equality
using the case analysis described above.
\begin{ttbox}
fun mk_eq th = case concl_of th of
    _ $ (Const("op =",_)$_$_)   => th RS eq_reflection
  | _ $ (Const("op <->",_)$_$_) => th RS iff_reflection
  | _ $ (Const("Not",_)$_)      => th RS iff_reflection_F
  | _                           => th RS iff_reflection_T;
\end{ttbox}
The three functions \texttt{gen_all}, \texttt{atomize} and \texttt{mk_eq}
will be composed together and supplied below to \texttt{setmksimps}.
\subsection{Making the initial simpset}

It is time to assemble these items.  We define the infix operator
\ttindex{addcongs} to insert congruence rules; given a list of
theorems, it converts their conclusions into meta-equalities and
passes them to \ttindex{addeqcongs}.
\begin{ttbox}
fun ss addcongs congs =
    ss addeqcongs (congs RL [eq_reflection,iff_reflection]);
\end{ttbox}
The list \texttt{IFOL_simps} contains the default rewrite rules for
intuitionistic first-order logic.  The first of these is the reflexive law
expressed as the equivalence $(a=a)\bimp\texttt{True}$; the rewrite rule $a=a$ is
clearly useless.
\begin{ttbox}
val IFOL_simps =
   [refl RS P_iff_T] \at conj_simps \at disj_simps \at not_simps \at
   imp_simps \at iff_simps \at quant_simps;
\end{ttbox}
The list \texttt{triv_rls} contains trivial theorems for the solver.  Any
subgoal that is simplified to one of these will be removed.
\begin{ttbox}
val notFalseI = int_prove_fun "~False";
val triv_rls = [TrueI,refl,iff_refl,notFalseI];
\end{ttbox}
The basic simpset for intuitionistic \FOL{} is
\ttindexbold{FOL_basic_ss}.  It preprocesses rewrites using {\tt
  gen_all}, \texttt{atomize} and \texttt{mk_eq}.  It solves simplified
subgoals using \texttt{triv_rls} and assumptions, and by detecting
contradictions.  It uses \ttindex{asm_simp_tac} to tackle subgoals of
conditional rewrites.

Other simpsets built from \texttt{FOL_basic_ss} will inherit these items.
In particular, \ttindexbold{IFOL_ss} introduces {\tt
  IFOL_simps} as rewrite rules, while \ttindexbold{FOL_ss} will later
extend \texttt{IFOL_ss} with classical rewrite rules such as
$\lnot\lnot P\bimp P$.
\index{*setmksimps}\index{*setSSolver}\index{*setSolver}\index{*setsubgoaler}
\index{*addsimps}\index{*addcongs}
\begin{ttbox}
fun unsafe_solver prems = FIRST'[resolve_tac (triv_rls {\at} prems),
                                 atac, etac FalseE];

fun safe_solver prems = FIRST'[match_tac (triv_rls {\at} prems),
                               eq_assume_tac, ematch_tac [FalseE]];

val FOL_basic_ss =
      empty_ss setsubgoaler asm_simp_tac
               addsimprocs [defALL_regroup, defEX_regroup]
               setSSolver  safe_solver
               setSolver   unsafe_solver
               setmksimps  (map mk_eq o atomize o gen_all);

val IFOL_ss =
      FOL_basic_ss addsimps (IFOL_simps {\at}
                             int_ex_simps {\at} int_all_simps)
                   addcongs [imp_cong];
\end{ttbox}
This simpset takes \texttt{imp_cong} as a congruence rule in order to use
contextual information to simplify the conclusions of implications:
\[ \List{\Var{P}\bimp\Var{P'};\; \Var{P'} \Imp \Var{Q}\bimp\Var{Q'}} \Imp
   (\Var{P}\imp\Var{Q}) \bimp (\Var{P'}\imp\Var{Q'})
\]
By adding the congruence rule \texttt{conj_cong}, we could obtain a similar
effect for conjunctions.
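Such congruence rules are supplied with the \texttt{addcongs} operator
defined above.  For instance, a simpset that also simplifies the second
conjunct under the assumption of the first could be built as follows
(a sketch; \texttt{my_ss} is a hypothetical name):
\begin{ttbox}
val my_ss = IFOL_ss addcongs [conj_cong];
\end{ttbox}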
\subsection{Splitter setup}\index{simplification!setting up the splitter}

To set up case splitting, we have to call the \ML{} functor
\ttindex{SplitterFun}, which takes the argument signature
\texttt{SPLITTER_DATA}.  So we prove the theorem
\texttt{meta_eq_to_iff} below and store it, together with the
\texttt{mk_eq} function described above and several standard theorems,
in the structure \texttt{SplitterData}.  Calling the functor with this
data yields a new instantiation of the splitter for our logic.
\begin{ttbox}
val meta_eq_to_iff = prove_goal IFOL.thy "x==y ==> x<->y"
  (fn [prem] => [rewtac prem, rtac iffI 1, atac 1, atac 1]);

structure SplitterData =
  struct
  structure Simplifier = Simplifier
  val mk_eq          = mk_eq
  val meta_eq_to_iff = meta_eq_to_iff
  val iffD           = iffD2
  val disjE          = disjE
  val conjE          = conjE
  val exE            = exE
  val contrapos      = contrapos
  val contrapos2     = contrapos2
  val notnotD        = notnotD
  end;

structure Splitter = SplitterFun(SplitterData);
\end{ttbox}
\subsection{Theory setup}\index{simplification!setting up the theory}
\begin{ttbox}\indexbold{*Simplifier.setup}\index{*setup!simplifier}
Simplifier.setup: (theory -> theory) list
\end{ttbox}

Advanced theory-related features of the simplifier (e.g.\ implicit
simpset support) have to be set up explicitly.  The simplifier already
provides a suitable setup function definition.  This has to be
installed into the base theory of any new object-logic via a
\texttt{setup} declaration.

For example, this is done in \texttt{FOL/IFOL.thy} as follows:
\begin{ttbox}
setup Simplifier.setup
\end{ttbox}
\index{simplification|)}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "ref"
%%% End: