(*<*)
theory simp imports Main begin
(*>*)

subsection{*Simplification Rules*}

text{*\index{simplification rules}
To facilitate simplification,
the attribute @{text"[simp]"}\index{*simp (attribute)}
declares theorems to be simplification rules, which the simplifier
will use automatically.  In addition, \isacommand{datatype} and
\isacommand{primrec} declarations (and a few others)
implicitly declare some simplification rules.
Explicit definitions are \emph{not} declared as
simplification rules automatically!

Nearly any theorem can become a simplification
rule. The simplifier will try to transform it into an equation.
For example, the theorem @{prop"~P"} is turned into @{prop"P = False"}.
The details are explained in \S\ref{sec:SimpHow}.

The simplification attribute of theorems can be turned on and off:%
\index{*simp del (attribute)}
\begin{quote}
\isacommand{declare} \textit{theorem-name}@{text"[simp]"}\\
\isacommand{declare} \textit{theorem-name}@{text"[simp del]"}
\end{quote}
Only equations that really simplify, like \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs} and
\isa{xs\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ xs}, should be declared as default
simplification rules.
More specific ones should only be used selectively and should
not be made default.  Distributivity laws, for example, alter
the structure of terms and can produce an exponential blow-up instead of
simplification.  A default simplification rule may
need to be disabled in certain proofs.  Frequent changes in the simplification
status of a theorem may indicate an unwise use of defaults.
\begin{warn}
  Simplification can run forever, for example if both $f(x) = g(x)$ and
  $g(x) = f(x)$ are simplification rules. It is the user's responsibility not
  to include simplification rules that can lead to nontermination, either on
  their own or in combination with other simplification rules.
\end{warn}
\begin{warn}
  It is inadvisable for a child theory $B$ to permanently toggle the
  simplification attribute of a theorem from a parent theory $A$.
  The reason is that if some theory $C$ is based both on $B$ and (via a
  different path) on $A$, it is not defined what the simplification attribute
  of that theorem will be in $C$: it could be either.
\end{warn}
*}
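text{*
To illustrate the \isacommand{declare} syntax shown above: assuming the library
lemma \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs} is
available under the name @{text rev_rev_ident} (a name used here purely for
illustration), its status as a default simplification rule could be toggled like this:
\begin{quote}
\isacommand{declare} @{text"rev_rev_ident[simp del]"}\\
\isacommand{declare} @{text"rev_rev_ident[simp]"}
\end{quote}
*}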
subsection{*The {\tt\slshape simp} Method*}

text{*\index{*simp (method)|bold}
The general format of the simplification method is
\begin{quote}
@{text simp} \textit{list of modifiers}
\end{quote}
where the list of \emph{modifiers} fine-tunes the behaviour and may
be empty. Specific modifiers are discussed below.  Most if not all of the
proofs seen so far could have been performed
with @{text simp} instead of \isa{auto}, except that @{text simp} attacks
only the first subgoal and may thus need to be repeated --- use
\methdx{simp_all} to simplify all subgoals.
If nothing changes, @{text simp} fails.
*}

subsection{*Adding and Deleting Simplification Rules*}

text{*
\index{simplification rules!adding and deleting}%
If a certain theorem is merely needed in a few proofs by simplification,
we do not need to make it a global simplification rule. Instead we can modify
the set of simplification rules used in a simplification step by adding rules
to it and/or deleting rules from it. The two modifiers for this are
\begin{quote}
@{text"add:"} \textit{list of theorem names}\index{*add (modifier)}\\
@{text"del:"} \textit{list of theorem names}\index{*del (modifier)}
\end{quote}
Or you can use a specific list of theorems and omit all others:
\begin{quote}
@{text"only:"} \textit{list of theorem names}\index{*only (modifier)}
\end{quote}
In this example, we invoke the simplifier, adding two distributive
laws:
\begin{quote}
\isacommand{apply}@{text"(simp add: mod_mult_distrib add_mult_distrib)"}
\end{quote}
*}
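text{*\noindent
Similarly, @{text"only:"} restricts the simplifier to exactly the listed
theorems. As a contrived illustration (again assuming the library lemma
\isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs} is
named @{text rev_rev_ident}), one might write
\begin{quote}
\isacommand{apply}@{text"(simp only: rev_rev_ident)"}
\end{quote}
which rewrites occurrences of \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}}
in the goal but performs no other simplifications.
*}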
subsection{*Assumptions*}

text{*\index{simplification!with/of assumptions}
By default, assumptions are part of the simplification process: they are used
as simplification rules and are simplified themselves. For example:
*}

lemma "\<lbrakk> xs @ zs = ys @ xs; [] @ xs = [] @ [] \<rbrakk> \<Longrightarrow> ys = zs"
apply simp
done

text{*\noindent
The second assumption simplifies to @{term"xs = []"}, which in turn
simplifies the first assumption to @{term"zs = ys"}, thus reducing the
conclusion to @{term"ys = ys"} and hence to @{term"True"}.

In some cases, using the assumptions can lead to nontermination:
*}

lemma "\<forall>x. f x = g (f (g x)) \<Longrightarrow> f [] = f [] @ []"

txt{*\noindent
An unmodified application of @{text"simp"} loops.  The culprit is the
simplification rule @{term"f x = g (f (g x))"}, which is extracted from
the assumption.  (Isabelle notices certain simple forms of
nontermination but not this one.)  The problem can be circumvented by
telling the simplifier to ignore the assumptions:
*}

apply(simp (no_asm))
done

text{*\noindent
Three modifiers influence the treatment of assumptions:
\begin{description}
\item[@{text"(no_asm)"}]\index{*no_asm (modifier)}
 means that assumptions are completely ignored.
\item[@{text"(no_asm_simp)"}]\index{*no_asm_simp (modifier)}
 means that the assumptions are not simplified but
 are used in the simplification of the conclusion.
\item[@{text"(no_asm_use)"}]\index{*no_asm_use (modifier)}
 means that the assumptions are simplified but are not
 used in the simplification of each other or the conclusion.
\end{description}
Only one of the modifiers is allowed, and it must precede all
other modifiers.
%\begin{warn}
%Assumptions are simplified in a left-to-right fashion. If an
%assumption can help in simplifying one to the left of it, this may get
%overlooked. In such cases you have to rotate the assumptions explicitly:
%\isacommand{apply}@ {text"("}\methdx{rotate_tac}~$n$@ {text")"}
%causes a cyclic shift by $n$ positions from right to left, if $n$ is
%positive, and from left to right, if $n$ is negative.
%Beware that such rotations make proofs quite brittle.
%\end{warn}
*}
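text{*\noindent
For instance, the contrived lemma below (invented purely to illustrate the
syntax) can be proved with @{text"(no_asm_simp)"}: the assumption
@{term"xs = []"} rewrites the conclusion but is not simplified itself.
*}

lemma "xs = [] \<Longrightarrow> xs @ ys = ys"
apply(simp (no_asm_simp))
done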
subsection{*Rewriting with Definitions*}

text{*\label{sec:Simp-with-Defs}\index{simplification!with definitions}
Constant definitions (\S\ref{sec:ConstDefinitions}) can be used as
simplification rules, but by default they are not: the simplifier does not
expand them automatically.  Definitions are intended for introducing abstract
concepts and not merely as abbreviations.  Of course, we need to expand
the definition initially, but once we have proved enough abstract properties
of the new constant, we can forget its original definition.  This style makes
proofs more robust: if the definition has to be changed,
only the proofs of the abstract properties will be affected.

For example, given *}

definition xor :: "bool \<Rightarrow> bool \<Rightarrow> bool" where
"xor A B \<equiv> (A \<and> \<not>B) \<or> (\<not>A \<and> B)"

text{*\noindent
we may want to prove
*}

lemma "xor A (\<not>A)"

txt{*\noindent
Typically, we begin by unfolding some definitions:
\indexbold{definitions!unfolding}
*}

apply(simp only: xor_def)

txt{*\noindent
In this particular case, the resulting goal
@{subgoals[display,indent=0]}
can be proved by simplification. Thus we could have proved the lemma outright by
*}(*<*)oops
lemma "xor A (\<not>A)"(*>*)
apply(simp add: xor_def)
(*<*)done(*>*)
text{*\noindent
Of course we can also unfold definitions in the middle of a proof.

\begin{warn}
  If you have defined $f\,x\,y~\isasymequiv~t$ then you can only unfold
  occurrences of $f$ with at least two arguments. This may be helpful for unfolding
  $f$ selectively, but it may also get in the way. Defining
  $f$~\isasymequiv~\isasymlambda$x\,y.\;t$ allows you to unfold all occurrences of $f$.
\end{warn}

There is also the special method \isa{unfold}\index{*unfold (method)|bold}
which merely unfolds
one or several definitions, as in \isacommand{apply}\isa{(unfold xor_def)}.
This can be useful in situations where \isa{simp} does too much.
Warning: \isa{unfold} acts on all subgoals!
*}

subsection{*Simplifying {\tt\slshape let}-Expressions*}

text{*\index{simplification!of \isa{let}-expressions}\index{*let expressions}%
Proving a goal containing \isa{let}-expressions almost invariably requires the
@{text"let"}-con\-structs to be expanded at some point. Since
@{text"let"}\ldots\isa{=}\ldots@{text"in"}{\ldots} is just syntactic sugar for
the predefined constant @{term"Let"}, expanding @{text"let"}-constructs
means rewriting with \tdx{Let_def}: *}

lemma "(let xs = [] in xs@ys@xs) = ys"
apply(simp add: Let_def)
done

text{*
If, in a particular context, there is no danger of a combinatorial explosion
of nested @{text"let"}s, you could even simplify with @{thm[source]Let_def} by
default:
*}
declare Let_def [simp]
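text{*\noindent
For instance, with @{thm[source]Let_def} declared as a default simplification
rule as above, a small made-up goal like the following is proved by plain
@{text simp}:
*}

lemma "(let xs = [a] in xs @ xs) = [a, a]"
apply simp
done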
subsection{*Conditional Simplification Rules*}

text{*
\index{conditional simplification rules}%
So far all examples of rewrite rules were equations. The simplifier also
accepts \emph{conditional} equations, for example
*}

lemma hd_Cons_tl[simp]: "xs \<noteq> [] \<Longrightarrow> hd xs # tl xs = xs"
apply(case_tac xs, simp, simp)
done

text{*\noindent
Note the use of ``\ttindexboldpos{,}{$Isar}'' to string together a
sequence of methods. Assuming that the simplification rule
@{term"(rev xs = []) = (xs = [])"}
is present as well,
the lemma below is proved by plain simplification:
*}

lemma "xs \<noteq> [] \<Longrightarrow> hd(rev xs) # tl(rev xs) = rev xs"
(*<*)
by(simp)
(*>*)
text{*\noindent
The conditional equation @{thm[source]hd_Cons_tl} above
can simplify @{term"hd(rev xs) # tl(rev xs)"} to @{term"rev xs"}
because the corresponding precondition @{term"rev xs ~= []"}
simplifies to @{term"xs ~= []"}, which is exactly the local
assumption of the subgoal.
*}


subsection{*Automatic Case Splits*}

text{*\label{sec:AutoCaseSplits}\indexbold{case splits}%
Goals containing @{text"if"}-expressions\index{*if expressions!splitting of}
are usually proved by case
distinction on the boolean condition.  Here is an example:
*}

lemma "\<forall>xs. if xs = [] then rev xs = [] else rev xs \<noteq> []"

txt{*\noindent
The goal can be split by a special method, \methdx{split}:
*}

apply(split split_if)

txt{*\noindent
@{subgoals[display,indent=0]}
where \tdx{split_if} is a theorem that expresses splitting of
@{text"if"}s. Because
splitting the @{text"if"}s is usually the right proof strategy, the
simplifier does it automatically.  Try \isacommand{apply}@{text"(simp)"}
on the initial goal above.

This splitting idea generalizes from @{text"if"} to \sdx{case}.
Let us simplify a case analysis over lists:\index{*list.split (theorem)}
*}(*<*)by simp(*>*)

lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs"
apply(split list.split)

txt{*
@{subgoals[display,indent=0]}
The simplifier does not split
@{text"case"}-expressions, as it does @{text"if"}-expressions,
because with recursive datatypes it could lead to nontermination.
Instead, the simplifier has a modifier
@{text split}\index{*split (modifier)}
for adding splitting rules explicitly.  The
lemma above can be proved in one step by
*}
(*<*)oops
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y#(ys@zs)) = xs@zs"
(*>*)
apply(simp split: list.split)
(*<*)done(*>*)
text{*\noindent
whereas \isacommand{apply}@{text"(simp)"} alone will not succeed.

Every datatype $t$ comes with a theorem
$t$@{text".split"} which can be declared to be a \bfindex{split rule} either
locally as above, or by giving it the \attrdx{split} attribute globally:
*}

declare list.split [split]

text{*\noindent
The @{text"split"} attribute can be removed with the @{text"del"} modifier,
either locally
*}
(*<*)
lemma "dummy=dummy"
(*>*)
apply(simp split del: split_if)
(*<*)
oops
(*>*)
text{*\noindent
or globally:
*}
declare list.split [split del]

text{*
Polished proofs typically perform splitting within @{text simp} rather than
invoking the @{text split} method. However, if a goal contains
several @{text "if"} and @{text case} expressions,
the @{text split} method can be
helpful in selectively exploring the effects of splitting.

The split rules shown above are intended to affect only the subgoal's
conclusion.  If you want to split an @{text"if"} or @{text"case"}-expression
in the assumptions, you have to apply \tdx{split_if_asm} or
$t$@{text".split_asm"}: *}

lemma "if xs = [] then ys \<noteq> [] else ys = [] \<Longrightarrow> xs @ ys \<noteq> []"
apply(split split_if_asm)

txt{*\noindent
Unlike splitting the conclusion, this step creates two
separate subgoals, which here can be solved by @{text"simp_all"}:
@{subgoals[display,indent=0]}
If you need to split both in the assumptions and the conclusion,
use $t$@{text".splits"} which subsumes $t$@{text".split"} and
$t$@{text".split_asm"}. Analogously, there is @{thm[source]if_splits}.

\begin{warn}
  The simplifier merely simplifies the condition of an
  \isa{if}\index{*if expressions!simplification of} but not the
  \isa{then} or \isa{else} parts. The latter are simplified only after the
  condition reduces to \isa{True} or \isa{False}, or after splitting. The
  same is true for \sdx{case}-expressions: only the selector is
  simplified at first, until either the expression reduces to one of the
  cases or it is split.
\end{warn}
*}
(*<*)
by(simp_all)
(*>*)
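text{*\noindent
As mentioned above, polished proofs usually let @{text simp} do the splitting
itself. For instance, the @{text split_if_asm} example above can also be
closed in a single step (this variation is only meant as an illustration):
*}

lemma "if xs = [] then ys \<noteq> [] else ys = [] \<Longrightarrow> xs @ ys \<noteq> []"
apply(simp split: split_if_asm)
done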
subsection{*Tracing*}
text{*\indexbold{tracing the simplifier}
Using the simplifier effectively may take a bit of experimentation.  Set the
Proof General flag \pgmenu{Isabelle} $>$ \pgmenu{Settings} $>$ \pgmenu{Trace Simplifier} to get a better idea of what is going on:
*}

lemma "rev [a] = []"
apply(simp)
(*<*)oops(*>*)

text{*\noindent
produces the following trace in Proof General's \pgmenu{Trace} buffer:

\begin{ttbox}\makeatother
[1]Applying instance of rewrite rule "List.rev.simps_2":
rev (?x1 # ?xs1) \(\equiv\) rev ?xs1 @ [?x1]

[1]Rewriting:
rev [a] \(\equiv\) rev [] @ [a]

[1]Applying instance of rewrite rule "List.rev.simps_1":
rev [] \(\equiv\) []

[1]Rewriting:
rev [] \(\equiv\) []

[1]Applying instance of rewrite rule "List.op @.append_Nil":
[] @ ?y \(\equiv\) ?y

[1]Rewriting:
[] @ [a] \(\equiv\) [a]

[1]Applying instance of rewrite rule
?x2 # ?t1 = ?t1 \(\equiv\) False

[1]Rewriting:
[a] = [] \(\equiv\) False
\end{ttbox}
The trace lists each rule being applied, both in its general form and
the instance being used. The \texttt{[}$i$\texttt{]} in front (where
above $i$ is always \texttt{1}) indicates that we are inside the $i$th
invocation of the simplifier. Each attempt to apply a
conditional rule shows the rule followed by the trace of the
(recursive!) simplification of the conditions, the latter prefixed by
\texttt{[}$i+1$\texttt{]} instead of \texttt{[}$i$\texttt{]}.
Another source of recursive invocations of the simplifier is the
proof of arithmetic formulae. By default, recursive invocations are not shown;
to see them you must increase the trace depth via \pgmenu{Isabelle} $>$
\pgmenu{Settings} $>$ \pgmenu{Trace Simplifier Depth}.

Many other hints about the simplifier's actions may appear.

In more complicated cases, the trace can be very lengthy.  Thus it is
advisable to reset the \pgmenu{Trace Simplifier} flag after having
obtained the desired trace.
Since this is easily forgotten (and may have the unpleasant effect of
swamping the interface with trace information), here is how you can switch
the trace on locally in a proof: *}

(*<*)lemma "x=x"
(*>*)
using [[trace_simp=true]]
apply simp
(*<*)oops(*>*)

text{* \noindent
Within the current proof, all simplifications in subsequent proof steps
will be traced, but remember to remove the \isa{using} clause
after it has done its job. *}
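text{*\noindent
If you want tracing over more than a single proof, the same configuration
option can presumably also be set for the rest of the theory by a top-level
declaration such as
\begin{quote}
\isacommand{declare} @{text"[[trace_simp=true]]"}
\end{quote}
(shown here only as a sketch); in that case, remember to remove the
declaration again, for the reasons just given.
*}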
subsection{*Finding Theorems\label{sec:find}*}

text{*\indexbold{finding theorems}\indexbold{searching theorems}
Isabelle's large database of proved theorems
offers a powerful search engine. Its chief limitation is
its restriction to the theories currently loaded.

\begin{pgnote}
The search engine is started by clicking on Proof General's \pgmenu{Find} icon.
You specify your search textually in the input buffer at the bottom
of the window.
\end{pgnote}

The simplest form of search finds theorems containing specified
patterns.  A pattern can be any term (even
a single identifier).  It may contain ``\texttt{\_}'', a wildcard standing
for any term. Here are some
examples:
\begin{ttbox}
length
"_ # _ = _ # _"
"_ + _"
"_ * (_ - (_::nat))"
\end{ttbox}
Specifying types, as shown in the last example,
constrains searches involving overloaded operators.

\begin{warn}
Always use ``\texttt{\_}'' rather than variable names: searching for
\texttt{"x + y"} will usually not find any matching theorems
because they would need to contain \texttt{x} and~\texttt{y} literally.
When searching for infix operators, do not just type in the symbol,
such as~\texttt{+}, but a proper term such as \texttt{"_ + _"}.
This remark applies to more complicated syntaxes, too.
\end{warn}

If you are looking for rewrite rules (possibly conditional) that could
simplify some term, prefix the pattern with \texttt{simp:}.
\begin{ttbox}
simp: "_ * (_ + _)"
\end{ttbox}
This finds \emph{all} equations---not just those with a \isa{simp} attribute---whose conclusion has the form
@{text[display]"_ * (_ + _) = \<dots>"}
It only finds equations that can simplify the given pattern
at the root, not somewhere inside: for example, equations of the form
@{text"_ + _ = \<dots>"} do not match.
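For instance, a search for
\begin{ttbox}
simp: "rev (rev _)"
\end{ttbox}
should find, among others, the rewrite rule stating
\isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs},
because its left-hand side matches the pattern at the root.
(This concrete search is only meant as an illustration.)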
You may also search for theorems by name---you merely
need to specify a substring. For example, you could search for all
commutativity theorems like this:
\begin{ttbox}
name: comm
\end{ttbox}
This retrieves all theorems whose name contains \texttt{comm}.

Search criteria can also be negated by prefixing them with ``\texttt{-}''.
For example,
\begin{ttbox}
-name: List
\end{ttbox}
finds theorems whose name does not contain \texttt{List}. You can use this
to exclude particular theories from the search: the long name of
a theorem contains the name of the theory it comes from.

Finally, different search criteria can be combined arbitrarily.
The effect is conjunctive: Find returns the theorems that satisfy all of
the criteria. For example,
\begin{ttbox}
"_ + _" -"_ - _" -simp: "_ * (_ + _)" name: assoc
\end{ttbox}
looks for theorems containing plus but not minus, and which do not simplify
\mbox{@{text"_ * (_ + _)"}} at the root, and whose name contains \texttt{assoc}.

Further search criteria are explained in \S\ref{sec:find2}.

\begin{pgnote}
Proof General keeps a history of all your search expressions.
If you click on \pgmenu{Find}, you can use the arrow keys to scroll
through previous searches and just modify them. This saves you having
to type in lengthy expressions again and again.
\end{pgnote}
*}
(*<*)
end
(*>*)