(*<*)
theory simp imports Main begin
(*>*)

subsection{*Simplification Rules*}

text{*\index{simplification rules}
To facilitate simplification,
the attribute @{text"[simp]"}\index{*simp (attribute)}
declares theorems to be simplification rules, which the simplifier
will use automatically.  In addition, \isacommand{datatype} and
\isacommand{primrec} declarations (and a few others)
implicitly declare some simplification rules.
Explicit definitions are \emph{not} declared as
simplification rules automatically!

Nearly any theorem can become a simplification
rule. The simplifier will try to transform it into an equation.
For example, the theorem
@{prop"~P"} is turned into @{prop"P = False"}. The details
are explained in \S\ref{sec:SimpHow}.

The simplification attribute of theorems can be turned on and off:%
\index{*simp del (attribute)}
\begin{quote}
\isacommand{declare} \textit{theorem-name}@{text"[simp]"}\\
\isacommand{declare} \textit{theorem-name}@{text"[simp del]"}
\end{quote}
Only equations that really simplify, like \isa{rev\
{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs} and
\isa{xs\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\
{\isacharequal}\ xs}, should be declared as default simplification rules.
More specific ones should only be used selectively and should
not be made default.  Distributivity laws, for example, alter
the structure of terms and can produce an exponential blow-up instead of
simplification.  A default simplification rule may
need to be disabled in certain proofs.  Frequent changes in the simplification
status of a theorem may indicate an unwise use of defaults.
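For instance, the equation @{text"rev (rev xs) = xs"} from the list library
(under its library name, here assumed to be @{text rev_rev_ident}) could be
disabled and later re-enabled by
\begin{quote}
\isacommand{declare} @{text"rev_rev_ident[simp del]"}\\
\isacommand{declare} @{text"rev_rev_ident[simp]"}
\end{quote}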
\begin{warn}
  Simplification can run forever, for example if both $f(x) = g(x)$ and
  $g(x) = f(x)$ are simplification rules. It is the user's responsibility not
  to include simplification rules that can lead to nontermination, either on
  their own or in combination with other simplification rules.
\end{warn}
\begin{warn}
  It is inadvisable to permanently toggle the simplification attribute of a
  theorem from a parent theory $A$ in a child theory $B$.
  The reason is that if some theory $C$ is based both on $B$ and (via a
  different path) on $A$, it is not defined what the simplification attribute
  of that theorem will be in $C$: it could be either.
\end{warn}
*}

subsection{*The {\tt\slshape simp} Method*}

text{*\index{*simp (method)|bold}
The general format of the simplification method is
\begin{quote}
@{text simp} \textit{list of modifiers}
\end{quote}
where the list of \emph{modifiers} fine-tunes the behaviour and may
be empty. Specific modifiers are discussed below.  Most if not all of the
proofs seen so far could have been performed
with @{text simp} instead of \isa{auto}, except that @{text simp} attacks
only the first subgoal and may thus need to be repeated --- use
\methdx{simp_all} to simplify all subgoals.
If nothing changes, @{text simp} fails.
*}
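
text{*\noindent
As a minimal (artificial) illustration, the following lemma yields two
subgoals once the conjunction has been split; a single @{text simp} proves
only the first of them, whereas @{text simp_all} finishes both:
*}

lemma "xs @ [] = xs \<and> rev [] = []"
apply(rule conjI)
apply(simp_all)
done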

subsection{*Adding and Deleting Simplification Rules*}

text{*
\index{simplification rules!adding and deleting}%
If a certain theorem is merely needed in a few proofs by simplification,
we do not need to make it a global simplification rule. Instead we can modify
the set of simplification rules used in a simplification step by adding rules
to it and/or deleting rules from it. The two modifiers for this are
\begin{quote}
@{text"add:"} \textit{list of theorem names}\index{*add (modifier)}\\
@{text"del:"} \textit{list of theorem names}\index{*del (modifier)}
\end{quote}
Or you can use a specific list of theorems and omit all others:
\begin{quote}
@{text"only:"} \textit{list of theorem names}\index{*only (modifier)}
\end{quote}
In this example, we invoke the simplifier, adding two distributive
laws:
\begin{quote}
\isacommand{apply}@{text"(simp add: mod_mult_distrib add_mult_distrib)"}
\end{quote}
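The other modifiers are used in the same way; for instance (reusing the
theorem names from above purely for illustration):
\begin{quote}
\isacommand{apply}@{text"(simp del: add_mult_distrib)"}\\
\isacommand{apply}@{text"(simp only: mod_mult_distrib add_mult_distrib)"}
\end{quote}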
*}

subsection{*Assumptions*}

text{*\index{simplification!with/of assumptions}
By default, assumptions are part of the simplification process: they are used
as simplification rules and are simplified themselves. For example:
*}

lemma "\<lbrakk> xs @ zs = ys @ xs; [] @ xs = [] @ [] \<rbrakk> \<Longrightarrow> ys = zs"
apply simp
done

text{*\noindent
The second assumption simplifies to @{term"xs = []"}, which in turn
simplifies the first assumption to @{term"zs = ys"}, thus reducing the
conclusion to @{term"ys = ys"} and hence to @{term"True"}.

In some cases, using the assumptions can lead to nontermination:
*}

lemma "\<forall>x. f x = g (f (g x)) \<Longrightarrow> f [] = f [] @ []"

txt{*\noindent
An unmodified application of @{text"simp"} loops.  The culprit is the
simplification rule @{term"f x = g (f (g x))"}, which is extracted from
the assumption.  (Isabelle notices certain simple forms of
nontermination but not this one.)  The problem can be circumvented by
telling the simplifier to ignore the assumptions:
*}

apply(simp (no_asm))
done

text{*\noindent
Three modifiers influence the treatment of assumptions:
\begin{description}
\item[@{text"(no_asm)"}]\index{*no_asm (modifier)}
 means that assumptions are completely ignored.
\item[@{text"(no_asm_simp)"}]\index{*no_asm_simp (modifier)}
 means that the assumptions are not simplified but
  are used in the simplification of the conclusion.
\item[@{text"(no_asm_use)"}]\index{*no_asm_use (modifier)}
 means that the assumptions are simplified but are not
  used in the simplification of each other or the conclusion.
\end{description}
Only one of the modifiers is allowed, and it must precede all
other modifiers.
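Such a modifier can be combined with the @{text"add:"} and @{text"del:"}
modifiers described above, for example
\begin{quote}
\isacommand{apply}@{text"(simp (no_asm_simp) add: mod_mult_distrib)"}
\end{quote}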
%\begin{warn}
%Assumptions are simplified in a left-to-right fashion. If an
%assumption can help in simplifying one to the left of it, this may get
%overlooked. In such cases you have to rotate the assumptions explicitly:
%\isacommand{apply}@ {text"("}\methdx{rotate_tac}~$n$@ {text")"}
%causes a cyclic shift by $n$ positions from right to left, if $n$ is
%positive, and from left to right, if $n$ is negative.
%Beware that such rotations make proofs quite brittle.
%\end{warn}
*}

subsection{*Rewriting with Definitions*}

text{*\label{sec:Simp-with-Defs}\index{simplification!with definitions}
Constant definitions (\S\ref{sec:ConstDefinitions}) can be used as
simplification rules, but by default they are not: the simplifier does not
expand them automatically.  Definitions are intended for introducing abstract
concepts and not merely as abbreviations.  Of course, we need to expand
the definition initially, but once we have proved enough abstract properties
of the new constant, we can forget its original definition.  This style makes
proofs more robust: if the definition has to be changed,
only the proofs of the abstract properties will be affected.

For example, given *}

definition xor :: "bool \<Rightarrow> bool \<Rightarrow> bool" where
"xor A B \<equiv> (A \<and> \<not>B) \<or> (\<not>A \<and> B)"

text{*\noindent
we may want to prove
*}

lemma "xor A (\<not>A)"

txt{*\noindent
Typically, we begin by unfolding some definitions:
\indexbold{definitions!unfolding}
*}

apply(simp only: xor_def)

txt{*\noindent
In this particular case, the resulting goal
@{subgoals[display,indent=0]}
can be proved by simplification. Thus we could have proved the lemma outright by
*}(*<*)oops lemma "xor A (\<not>A)"(*>*)
apply(simp add: xor_def)
(*<*)done(*>*)
text{*\noindent
Of course we can also unfold definitions in the middle of a proof.

\begin{warn}
  If you have defined $f\,x\,y~\isasymequiv~t$ then you can only unfold
  occurrences of $f$ with at least two arguments. This may be helpful for unfolding
  $f$ selectively, but it may also get in the way. Defining
  $f$~\isasymequiv~\isasymlambda$x\,y.\;t$ allows all occurrences of $f$ to be unfolded.
\end{warn}
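For instance (a purely hypothetical sketch), with $f\,x\,y~\isasymequiv~x \wedge y$
the rule @{text f_def} cannot rewrite the partially applied occurrence
@{text"map (f True) bs"}, whereas with $f$~\isasymequiv~\isasymlambda$x\,y.\;x \wedge y$
it can (unfolding is followed by $\beta$-reduction).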

There is also the special method \isa{unfold}\index{*unfold (method)|bold}
which merely unfolds
one or several definitions, as in \isacommand{apply}\isa{(unfold xor_def)}.
This can be useful in situations where \isa{simp} does too much.
Warning: \isa{unfold} acts on all subgoals!
*}
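
text{*\noindent
As a small illustration of unfolding in the middle of a proof, the
commutativity of @{text xor} can be shown by a case distinction on
@{text A} followed by simplification with the definition:
*}

lemma "xor A B = xor B A"
apply(case_tac A)
apply(simp_all add: xor_def)
done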

subsection{*Simplifying {\tt\slshape let}-Expressions*}

text{*\index{simplification!of \isa{let}-expressions}\index{*let expressions}%
Proving a goal containing \isa{let}-expressions almost invariably requires the
@{text"let"}-con\-structs to be expanded at some point. Since
@{text"let"}\ldots\isa{=}\ldots@{text"in"}{\ldots} is just syntactic sugar for
the predefined constant @{term"Let"}, expanding @{text"let"}-constructs
means rewriting with \tdx{Let_def}: *}

lemma "(let xs = [] in xs@ys@xs) = ys"
apply(simp add: Let_def)
done

text{*
If, in a particular context, there is no danger of a combinatorial explosion
of nested @{text"let"}s, you could even simplify with @{thm[source]Let_def} by
default:
*}
declare Let_def [simp]
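
text{*\noindent
With this declaration in place, the @{text let}-lemma above can now be proved
by a plain @{text simp}:
*}

lemma "(let xs = [] in xs@ys@xs) = ys"
by simp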

subsection{*Conditional Simplification Rules*}

text{*
\index{conditional simplification rules}%
So far all examples of rewrite rules were equations. The simplifier also
accepts \emph{conditional} equations, for example
*}

lemma hd_Cons_tl[simp]: "xs \<noteq> []  \<Longrightarrow>  hd xs # tl xs = xs"
apply(case_tac xs, simp, simp)
done

text{*\noindent
Note the use of ``\ttindexboldpos{,}{$Isar}'' to string together a
sequence of methods. Assuming that the simplification rule
@{term"(rev xs = []) = (xs = [])"}
is present as well,
the lemma below is proved by plain simplification:
*}

lemma "xs \<noteq> [] \<Longrightarrow> hd(rev xs) # tl(rev xs) = rev xs"
(*<*)
by(simp)
(*>*)
text{*\noindent
The conditional equation @{thm[source]hd_Cons_tl} above
can simplify @{term"hd(rev xs) # tl(rev xs)"} to @{term"rev xs"}
because the corresponding precondition @{term"rev xs ~= []"}
simplifies to @{term"xs ~= []"}, which is exactly the local
assumption of the subgoal.
*}


subsection{*Automatic Case Splits*}

text{*\label{sec:AutoCaseSplits}\indexbold{case splits}%
Goals containing @{text"if"}-expressions\index{*if expressions!splitting of}
are usually proved by case
distinction on the boolean condition.  Here is an example:
*}

lemma "\<forall>xs. if xs = [] then rev xs = [] else rev xs \<noteq> []"

txt{*\noindent
The goal can be split by a special method, \methdx{split}:
*}

apply(split split_if)

txt{*\noindent
@{subgoals[display,indent=0]}
where \tdx{split_if} is a theorem that expresses splitting of
@{text"if"}s. Because
splitting the @{text"if"}s is usually the right proof strategy, the
simplifier does it automatically.  Try \isacommand{apply}@{text"(simp)"}
on the initial goal above.

This splitting idea generalizes from @{text"if"} to \sdx{case}.
Let us simplify a case analysis over lists:\index{*list.split (theorem)}
*}(*<*)by simp(*>*)
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs"
apply(split list.split)

txt{*
@{subgoals[display,indent=0]}
The simplifier does not split
@{text"case"}-expressions, as it does @{text"if"}-expressions,
because with recursive datatypes it could lead to nontermination.
Instead, the simplifier has a modifier
@{text split}\index{*split (modifier)}
for adding splitting rules explicitly.  The
lemma above can be proved in one step by
*}
(*<*)oops
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs"
(*>*)
apply(simp split: list.split)
(*<*)done(*>*)
text{*\noindent
whereas \isacommand{apply}@{text"(simp)"} alone will not succeed.

Every datatype $t$ comes with a theorem
$t$@{text".split"} which can be declared to be a \bfindex{split rule} either
locally as above, or by giving it the \attrdx{split} attribute globally:
*}

declare list.split [split]
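
text{*\noindent
After this declaration, plain simplification performs the split by itself,
and the case lemma above goes through in one step:
*}

lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs"
by simp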

text{*\noindent
The @{text"split"} attribute can be removed with the @{text"del"} modifier,
either locally
*}
(*<*)
lemma "dummy=dummy"
(*>*)
apply(simp split del: split_if)
(*<*)
oops
(*>*)
text{*\noindent
or globally:
*}
declare list.split [split del]

text{*
Polished proofs typically perform splitting within @{text simp} rather than
invoking the @{text split} method.  However, if a goal contains
several @{text "if"} and @{text case} expressions,
the @{text split} method can be
helpful in selectively exploring the effects of splitting.

The split rules shown above are intended to affect only the subgoal's
conclusion.  If you want to split an @{text"if"} or @{text"case"}-expression
in the assumptions, you have to apply \tdx{split_if_asm} or
$t$@{text".split_asm"}: *}

lemma "if xs = [] then ys \<noteq> [] else ys = [] \<Longrightarrow> xs @ ys \<noteq> []"
apply(split split_if_asm)

txt{*\noindent
Unlike splitting the conclusion, this step creates two
separate subgoals, which here can be solved by @{text"simp_all"}:
@{subgoals[display,indent=0]}
If you need to split both in the assumptions and the conclusion,
use $t$@{text".splits"} which subsumes $t$@{text".split"} and
$t$@{text".split_asm"}. Analogously, there is @{thm[source]if_splits}.
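Alternatively, the original implication could have been proved in a single
step by
\begin{quote}
\isacommand{apply}@{text"(simp split: split_if_asm)"}
\end{quote}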

\begin{warn}
  The simplifier merely simplifies the condition of an
  \isa{if}\index{*if expressions!simplification of} but not the
  \isa{then} or \isa{else} parts. The latter are simplified only after the
  condition reduces to \isa{True} or \isa{False}, or after splitting. The
  same is true for \sdx{case}-expressions: only the selector is
  simplified at first, until either the expression reduces to one of the
  cases or it is split.
\end{warn}
*}
(*<*)
by(simp_all)
(*>*)

subsection{*Tracing*}
text{*\indexbold{tracing the simplifier}
Using the simplifier effectively may take a bit of experimentation.  Set the
Proof General flag \pgmenu{Isabelle} $>$ \pgmenu{Settings} $>$ \pgmenu{Trace Simplifier} to get a better idea of what is going on:
*}

lemma "rev [a] = []"
apply(simp)
(*<*)oops(*>*)

text{*\noindent
produces the following trace in Proof General's \pgmenu{Trace} buffer:

\begin{ttbox}\makeatother
[1]Applying instance of rewrite rule "List.rev.simps_2":
rev (?x1 # ?xs1) \(\equiv\) rev ?xs1 @ [?x1]

[1]Rewriting:
rev [a] \(\equiv\) rev [] @ [a]

[1]Applying instance of rewrite rule "List.rev.simps_1":
rev [] \(\equiv\) []

[1]Rewriting:
rev [] \(\equiv\) []

[1]Applying instance of rewrite rule "List.op @.append_Nil":
[] @ ?y \(\equiv\) ?y

[1]Rewriting:
[] @ [a] \(\equiv\) [a]

[1]Applying instance of rewrite rule
?x2 # ?t1 = ?t1 \(\equiv\) False

[1]Rewriting:
[a] = [] \(\equiv\) False
\end{ttbox}
The trace lists each rule being applied, both in its general form and
the instance being used. The \texttt{[}$i$\texttt{]} in front (where
above $i$ is always \texttt{1}) indicates that we are inside the $i$th
invocation of the simplifier. Each attempt to apply a
conditional rule shows the rule followed by the trace of the
(recursive!) simplification of the conditions, the latter prefixed by
\texttt{[}$i+1$\texttt{]} instead of \texttt{[}$i$\texttt{]}.
Proofs of arithmetic formulae are another source of recursive invocations of
the simplifier. By default, recursive invocations are not shown; to see them,
you must increase the trace depth via \pgmenu{Isabelle} $>$ \pgmenu{Settings} $>$ \pgmenu{Trace Simplifier Depth}.

Many other hints about the simplifier's actions may appear.

In more complicated cases, the trace can be very lengthy.  Thus it is
advisable to reset the \pgmenu{Trace Simplifier} flag after having
obtained the desired trace.
Since this is easily forgotten (and may have the unpleasant effect of
swamping the interface with trace information), here is how you can switch
the trace on locally in a proof: *}

(*<*)lemma "x=x"
(*>*)
using [[trace_simp=true]]
apply simp
(*<*)oops(*>*)

text{* \noindent
Within the current proof, all simplifications in subsequent proof steps
will be traced, but the text reminds you to remove the \isa{using} clause
after it has done its job. *}

subsection{*Finding Theorems\label{sec:find}*}

text{*\indexbold{finding theorems}\indexbold{searching theorems}
Isabelle's large database of proved theorems
offers a powerful search engine. Its chief limitation is
its restriction to the theories currently loaded.

\begin{pgnote}
The search engine is started by clicking on Proof General's \pgmenu{Find} icon.
You specify your search textually in the input buffer at the bottom
of the window.
\end{pgnote}

The simplest form of search finds theorems containing specified
patterns.  A pattern can be any term (even
a single identifier).  It may contain ``\texttt{\_}'', a wildcard standing
for any term. Here are some
examples:
\begin{ttbox}
length
"_ # _ = _ # _"
"_ + _"
"_ * (_ - (_::nat))"
\end{ttbox}
Specifying types, as shown in the last example,
constrains searches involving overloaded operators.

\begin{warn}
Always use ``\texttt{\_}'' rather than variable names: searching for
\texttt{"x + y"} will usually not find any matching theorems
because they would need to contain \texttt{x} and~\texttt{y} literally.
When searching for infix operators, do not just type in the symbol,
such as~\texttt{+}, but a proper term such as \texttt{"_ + _"}.
This remark applies to more complicated syntaxes, too.
\end{warn}

If you are looking for rewrite rules (possibly conditional) that could
simplify some term, prefix the pattern with \texttt{simp:}.
\begin{ttbox}
simp: "_ * (_ + _)"
\end{ttbox}
This finds \emph{all} equations---not just those with a \isa{simp} attribute---whose conclusion has the form
@{text[display]"_ * (_ + _) = \<dots>"}
It only finds equations that can simplify the given pattern
at the root, not somewhere inside: for example, equations of the form
@{text"_ + _ = \<dots>"} do not match.
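For instance, the query
\begin{ttbox}
simp: "rev (rev _)"
\end{ttbox}
should return, among other things, the familiar rule @{text"rev (rev xs) = xs"}.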

You may also search for theorems by name---you merely
need to specify a substring. For example, you could search for all
commutativity theorems like this:
\begin{ttbox}
name: comm
\end{ttbox}
This retrieves all theorems whose name contains \texttt{comm}.

Search criteria can also be negated by prefixing them with ``\texttt{-}''.
For example,
\begin{ttbox}
-name: List
\end{ttbox}
finds theorems whose name does not contain \texttt{List}. You can use this
to exclude particular theories from the search: the long name of
a theorem contains the name of the theory it comes from.

Finally, different search criteria can be combined arbitrarily.
The effect is conjunctive: Find returns the theorems that satisfy all of
the criteria. For example,
\begin{ttbox}
"_ + _"  -"_ - _"  -simp: "_ * (_ + _)"  name: assoc
\end{ttbox}
looks for theorems containing plus but not minus, and which do not simplify
\mbox{@{text"_ * (_ + _)"}} at the root, and whose name contains \texttt{assoc}.

Further search criteria are explained in \S\ref{sec:find2}.

\begin{pgnote}
Proof General keeps a history of all your search expressions.
If you click on \pgmenu{Find}, you can use the arrow keys to scroll
through previous searches and just modify them. This saves you having
to type in lengthy expressions again and again.
\end{pgnote}
*}
(*<*)
end
(*>*)