(*  Title:      doc-src/IsarAdvanced/Functions/Thy/Functions.thy
    Author:     Alexander Krauss, TU Muenchen

Tutorial for function definitions with the new "function" package.
*)

theory Functions
imports Main
begin

section {* Function Definitions for Dummies *}

text {*
  In most cases, defining a recursive function is just as simple as
  making other definitions:
*}

fun fib :: "nat \<Rightarrow> nat"
where
  "fib 0 = 1"
| "fib (Suc 0) = 1"
| "fib (Suc (Suc n)) = fib n + fib (Suc n)"

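text {*
  As a quick sanity check (an optional step, not required by the
  definition), we can let Isabelle evaluate the function via the code
  generator; with this definition, @{text "fib 5"} evaluates to
  @{text "8"}, since counting starts at @{text "fib 0 = 1"}:
*}

value "fib 5"
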
text {*
  The syntax is rather self-explanatory: We introduce a function by
  giving its name, its type,
  and a set of defining recursive equations.
  If we leave out the type, the most general type will be
  inferred, which can sometimes lead to surprises: Since both @{term
  "1::nat"} and @{text "+"} are overloaded, we would end up
  with @{text "fib :: nat \<Rightarrow> 'a::{one,plus}"}.
*}
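
text {*
  To see this effect, here is the same definition without a type
  signature (the name @{text fib_gen} is ours, chosen to avoid a
  clash with @{const fib} above); Isabelle infers the overloaded
  type @{text "nat \<Rightarrow> 'a::{one,plus}"} for it:
*}

fun fib_gen
where
  "fib_gen 0 = 1"
| "fib_gen (Suc 0) = 1"
| "fib_gen (Suc (Suc n)) = fib_gen n + fib_gen (Suc n)"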

text {*
  The function always terminates, since its argument gets smaller in
  every recursive call.
  Since HOL is a logic of total functions, termination is a
  fundamental requirement to prevent inconsistencies\footnote{From the
  \qt{definition} @{text "f(n) = f(n) + 1"} we could prove
  @{text "0 = 1"} by subtracting @{text "f(n)"} on both sides.}.
  Isabelle tries to prove termination automatically when a definition
  is made. In \S\ref{termination}, we will look at cases where this
  fails and see what to do then.
*}

subsection {* Pattern matching *}

text {* \label{patmatch}
  As in functional programming, we can use pattern matching to
  define functions. At the moment we will only consider \emph{constructor
  patterns}, which only consist of datatype constructors and
  variables. Furthermore, patterns must be linear, i.e.\ all variables
  on the left hand side of an equation must be distinct. In
  \S\ref{genpats} we discuss more general pattern matching.

  If patterns overlap, the order of the equations is taken into
  account. The following function inserts a fixed element between any
  two elements of a list:
*}

fun sep :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list"
where
  "sep a (x#y#xs) = x # a # sep a (y # xs)"
| "sep a xs       = xs"

text {*
  Overlapping patterns are interpreted as \qt{increments} to what is
  already there: The second equation is only meant for the cases where
  the first one does not match. Consequently, Isabelle replaces it
  internally by the remaining cases, making the patterns disjoint:
*}

thm sep.simps

text {* @{thm [display] sep.simps[no_vars]} *}

text {*
  \noindent The equations from function definitions are automatically used in
  simplification:
*}

lemma "sep 0 [1, 2, 3] = [1, 0, 2, 0, 3]"
by simp

subsection {* Induction *}

text {*

  Isabelle provides customized induction rules for recursive
  functions. These rules follow the recursive structure of the
  definition. Here is the rule @{text sep.induct} arising from the
  above definition of @{const sep}:

  @{thm [display] sep.induct}

  We have a step case for lists with at least two elements, and two
  base cases, for the empty and the one-element list. Here is a simple
  proof about @{const sep} and @{const map}:
*}

lemma "map f (sep x ys) = sep (f x) (map f ys)"
apply (induct x ys rule: sep.induct)

txt {*
  We get three cases, like in the definition.

  @{subgoals [display]}
*}

apply auto
done
text {*

  With the \cmd{fun} command, you can define about 80\% of the
  functions that occur in practice. The rest of this tutorial explains
  the remaining 20\%.
*}


section {* fun vs.\ function *}

text {*
  The \cmd{fun} command provides a
  convenient shorthand notation for simple function definitions. In
  this mode, Isabelle tries to solve all the necessary proof obligations
  automatically. If any proof fails, the definition is
  rejected. This can either mean that the definition is indeed faulty,
  or that the default proof procedures are just not smart enough (or
  rather: not designed) to handle the definition.

  By expanding the abbreviation to the more verbose \cmd{function}
  command, these proof obligations become visible and can be analyzed
  or solved manually. The expansion from \cmd{fun} to \cmd{function}
  is as follows:

\end{isamarkuptext}


\[\left[\;\begin{minipage}{0.25\textwidth}\vspace{6pt}
\cmd{fun} @{text "f :: \<tau>"}\\%
\cmd{where}\\%
\hspace*{2ex}{\it equations}\\%
\hspace*{2ex}\vdots\vspace*{6pt}
\end{minipage}\right]
\quad\equiv\quad
\left[\;\begin{minipage}{0.48\textwidth}\vspace{6pt}
\cmd{function} @{text "("}\cmd{sequential}@{text ") f :: \<tau>"}\\%
\cmd{where}\\%
\hspace*{2ex}{\it equations}\\%
\hspace*{2ex}\vdots\\%
\cmd{by} @{text "pat_completeness auto"}\\%
\cmd{termination by} @{text "lexicographic_order"}\vspace{6pt}
\end{minipage}
\right]\]

\begin{isamarkuptext}
  \vspace*{1em}
  \noindent Some details have now become explicit:

  \begin{enumerate}
  \item The \cmd{sequential} option enables the preprocessing of
  pattern overlaps which we already saw. Without this option, the equations
  must already be disjoint and complete. The automatic completion only
  works with constructor patterns.

  \item A function definition produces a proof obligation which
  expresses completeness and compatibility of patterns (we talk about
  this later). The combination of the methods @{text "pat_completeness"} and
  @{text "auto"} is used to solve this proof obligation.

  \item A termination proof follows the definition, started by the
  \cmd{termination} command. This will be explained in \S\ref{termination}.
  \end{enumerate}
  Whenever a \cmd{fun} command fails, it is usually a good idea to
  expand the syntax to the more verbose \cmd{function} form, to see
  what is actually going on.
*}

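text {*
  As a concrete instance of this expansion, here is the @{const fib}
  function from above in its expanded form (under the fresh name
  @{text fib_v}, to avoid redefining @{const fib}):
*}

function (sequential) fib_v :: "nat \<Rightarrow> nat"
where
  "fib_v 0 = 1"
| "fib_v (Suc 0) = 1"
| "fib_v (Suc (Suc n)) = fib_v n + fib_v (Suc n)"
by pat_completeness auto
termination by lexicographic_order
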

section {* Termination *}

text {*\label{termination}
  The method @{text "lexicographic_order"} is the default method for
  termination proofs. It can prove termination of a
  certain class of functions by searching for a suitable lexicographic
  combination of size measures. Of course, not all functions have such
  a simple termination argument. For them, we can specify the termination
  relation manually.
*}

subsection {* The {\tt relation} method *}
text{*
  Consider the following function, which sums up natural numbers up to
  @{text "N"}, using a counter @{text "i"}:
*}

function sum :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "sum i N = (if i > N then 0 else i + sum (Suc i) N)"
by pat_completeness auto

text {*
  \noindent The @{text "lexicographic_order"} method fails on this
  example, because none of the arguments decreases with respect to the
  standard size ordering in the recursive call.
  To prove termination manually, we must provide a custom wellfounded
  relation.

  The termination argument for @{text "sum"} is based on the fact that
  the \emph{difference} between @{text "i"} and @{text "N"} gets
  smaller in every step, and that the recursion stops when @{text "i"}
  is greater than @{text "N"}. Phrased differently, the expression
  @{text "N + 1 - i"} always decreases.

  We can use this expression as a measure function suitable to prove termination.
*}

termination sum
apply (relation "measure (\<lambda>(i,N). N + 1 - i)")

txt {*
  The \cmd{termination} command sets up the termination goal for the
  specified function @{text "sum"}. If the function name is omitted, it
  implicitly refers to the last function definition.

  The @{text relation} method takes a relation of
  type @{typ "('a \<times> 'a) set"}, where @{typ "'a"} is the argument type of
  the function. If the function has multiple curried arguments, then
  these are packed together into a tuple, as in the above
  example.

  The predefined function @{term[source] "measure :: ('a \<Rightarrow> nat) \<Rightarrow> ('a \<times> 'a) set"} constructs a
  wellfounded relation from a mapping into the natural numbers (a
  \emph{measure function}).

  After the invocation of @{text "relation"}, we must prove (a) that
  the relation we supplied is wellfounded, and (b) that the arguments
  of recursive calls indeed decrease with respect to the
  relation:

  @{subgoals[display,indent=0]}

  These goals are all solved by @{text "auto"}:
*}

apply auto
done
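
text {*
  A quick evaluation (our addition, just to confirm the expected
  behaviour) shows that @{text "sum 1 3"} yields @{text "6"}:
*}

value "sum 1 3"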

text {*
  Let us complicate the function a little, by adding some more
  recursive calls:
*}

function foo :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "foo i N = (if i > N
              then (if N = 0 then 0 else foo 0 (N - 1))
              else i + foo (Suc i) N)"
by pat_completeness auto

text {*
  When @{text "i"} has reached @{text "N"}, it starts at zero again
  and @{text "N"} is decremented.
  This corresponds to a nested
  loop where one index counts up and the other down. Termination can
  be proved using a lexicographic combination of two measures, namely
  the value of @{text "N"} and the above difference. The @{const
  "measures"} combinator generalizes @{text "measure"} by taking a
  list of measure functions.
*}

termination
by (relation "measures [\<lambda>(i, N). N, \<lambda>(i,N). N + 1 - i]") auto

subsection {* How @{text "lexicographic_order"} works *}

text {*
  To see how the automatic termination proofs work, let's look at an
  example where it fails\footnote{For a detailed discussion of the
  termination prover, see \cite{bulwahnKN07}.}:

\end{isamarkuptext}
\cmd{fun} @{text "fails :: \"nat \<Rightarrow> nat list \<Rightarrow> nat\""}\\%
\cmd{where}\\%
\hspace*{2ex}@{text "\"fails a [] = a\""}\\%
|\hspace*{1.5ex}@{text "\"fails a (x#xs) = fails (x + a) (x#xs)\""}\\
\begin{isamarkuptext}

\noindent Isabelle responds with the following error:

\begin{isabelle}
*** Unfinished subgoals:\newline
*** (a, 1, <):\newline
*** \ 1.~@{text "\<And>x. x = 0"}\newline
*** (a, 1, <=):\newline
*** \ 1.~False\newline
*** (a, 2, <):\newline
*** \ 1.~False\newline
*** Calls:\newline
*** a) @{text "(a, x # xs) -->> (x + a, x # xs)"}\newline
*** Measures:\newline
*** 1) @{text "\<lambda>x. size (fst x)"}\newline
*** 2) @{text "\<lambda>x. size (snd x)"}\newline
*** Result matrix:\newline
*** \ \ \ \ 1\ \ 2  \newline
*** a:  ?   <= \newline
*** Could not find lexicographic termination order.\newline
*** At command "fun".\newline
\end{isabelle}
*}
text {*
  The key to this error message is the matrix at the bottom. The rows
  of that matrix correspond to the different recursive calls (in our
  case, there is just one). The columns are the function's arguments
  (expressed through different measure functions, which map the
  argument tuple to a natural number).

  The contents of the matrix summarize what is known about argument
  descents: The second argument has a weak descent (@{text "<="}) at the
  recursive call, and for the first argument nothing could be proved,
  which is expressed by @{text "?"}. In general, there are the values
  @{text "<"}, @{text "<="} and @{text "?"}.

  For the failed proof attempts, the unfinished subgoals are also
  printed. Looking at these will often point to a missing lemma.
*}

subsection {* The @{text size_change} method *}

text {*
  Some termination goals that are beyond the powers of
  @{text lexicographic_order} can be solved automatically by the
  more powerful @{text size_change} method, which uses a variant of
  the size-change principle, together with some other
  techniques. While the details are discussed
  elsewhere\cite{krauss_phd},
  here are a few typical situations where
  @{text lexicographic_order} has difficulties and @{text size_change}
  may be worth a try:
  \begin{itemize}
  \item Arguments are permuted in a recursive call.
  \item Several mutually recursive functions with multiple arguments.
  \item Unusual control flow (e.g., when some recursive calls cannot
  occur in sequence).
  \end{itemize}

  Loading the theory @{text Multiset} makes the @{text size_change}
  method a bit stronger: it can then use multiset orders internally.
*}
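
text {*
  As an illustration of the first situation, here is a small function
  of our own (the name @{text swapsub} is not from the library) whose
  arguments are permuted in the recursive call. The
  @{text lexicographic_order} method fails on it, but
  @{text size_change} should be able to prove termination:
*}

function swapsub :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "swapsub m n = (if n = 0 then m else swapsub (n - 1) m)"
by pat_completeness auto
termination by size_change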

section {* Mutual Recursion *}

text {*
  If two or more functions call one another mutually, they have to be defined
  in one step. Here are @{text "even"} and @{text "odd"}:
*}

function even :: "nat \<Rightarrow> bool"
    and odd  :: "nat \<Rightarrow> bool"
where
  "even 0 = True"
| "odd 0 = False"
| "even (Suc n) = odd n"
| "odd (Suc n) = even n"
by pat_completeness auto

text {*
  To eliminate the mutual dependencies, Isabelle internally
  creates a single function operating on the sum
  type @{typ "nat + nat"}. Then, @{const even} and @{const odd} are
  defined as projections. Consequently, termination has to be proved
  simultaneously for both functions, by specifying a measure on the
  sum type:
*}

termination
by (relation "measure (\<lambda>x. case x of Inl n \<Rightarrow> n | Inr n \<Rightarrow> n)") auto

text {*
  We could also have used @{text lexicographic_order}, which
  supports termination proofs for mutual recursion to a certain extent.
*}
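
text {*
  For instance, the same definition goes through with the \cmd{fun}
  command, which runs @{text lexicographic_order} behind the scenes
  (we use the fresh names @{text even2} and @{text odd2} here, since
  @{const even} and @{const odd} are already defined):
*}

fun even2 :: "nat \<Rightarrow> bool"
and odd2 :: "nat \<Rightarrow> bool"
where
  "even2 0 = True"
| "odd2 0 = False"
| "even2 (Suc n) = odd2 n"
| "odd2 (Suc n) = even2 n"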

subsection {* Induction for mutual recursion *}

text {*

  When functions are mutually recursive, proving properties about them
  generally requires simultaneous induction. The induction rule @{text "even_odd.induct"}
  generated from the above definition reflects this.

  Let us prove something about @{const even} and @{const odd}:
*}

lemma even_odd_mod2:
  "even n = (n mod 2 = 0)"
  "odd n = (n mod 2 = 1)"

txt {*
  We apply simultaneous induction, specifying the induction variable
  for both goals, separated by \cmd{and}:  *}

apply (induct n and n rule: even_odd.induct)

txt {*
  We get four subgoals, which correspond to the clauses in the
  definition of @{const even} and @{const odd}:
  @{subgoals[display,indent=0]}
  Simplification solves the first two goals, leaving us with two
  statements about the @{text "mod"} operation to prove:
*}

apply simp_all

txt {*
  @{subgoals[display,indent=0]}

  \noindent These can be handled by Isabelle's arithmetic decision procedures.

*}

apply arith
apply arith
done

text {*
  In proofs like this, the simultaneous induction is really essential:
  Even if we are just interested in one of the results, the other
  one is necessary to strengthen the induction hypothesis. If we leave
  out the statement about @{const odd} and just write @{term True} instead,
  the same proof fails:
*}

lemma failed_attempt:
  "even n = (n mod 2 = 0)"
  "True"
apply (induct n rule: even_odd.induct)

txt {*
  \noindent Now the third subgoal is a dead end, since we have no
  useful induction hypothesis available:

  @{subgoals[display,indent=0]}
*}

oops

section {* General pattern matching *}
text{*\label{genpats} *}

subsection {* Avoiding automatic pattern splitting *}

text {*

  Up to now, we used pattern matching only on datatypes, and the
  patterns were always disjoint and complete; if they were not, they
  were made disjoint automatically, as in the definition of
  @{const "sep"} in \S\ref{patmatch}.

  This automatic splitting can significantly increase the number of
  equations involved, and this is not always desirable. The following
  example shows the problem:

  Suppose we are modeling incomplete knowledge about the world by a
  three-valued datatype, which has values @{term "T"}, @{term "F"}
  and @{term "X"} for true, false and uncertain propositions, respectively.
*}

datatype P3 = T | F | X

text {* \noindent Then the conjunction of such values can be defined as follows: *}

fun And :: "P3 \<Rightarrow> P3 \<Rightarrow> P3"
where
  "And T p = p"
| "And p T = p"
| "And p F = F"
| "And F p = F"
| "And X X = X"


text {*
  This definition is useful, because the equations can directly be used
  as simplification rules. But the patterns overlap: For example,
  the expression @{term "And T T"} is matched by both the first and
  the second equation. By default, Isabelle makes the patterns disjoint by
  splitting them up, producing instances:
*}

thm And.simps

text {*
  @{thm[indent=4] And.simps}

  \vspace*{1em}
  \noindent There are several problems with this:

  \begin{enumerate}
  \item If the datatype has many constructors, there can be an
  explosion of equations. For @{const "And"}, we get seven instead of
  five equations, which can be tolerated, but this is just a small
  example.

  \item Since splitting makes the equations \qt{less general}, they
  do not always match in rewriting. While the term @{term "And x F"}
  can be simplified to @{term "F"} with the original equations, a
  (manual) case split on @{term "x"} is now necessary.

  \item The splitting also concerns the induction rule @{text
  "And.induct"}. Instead of five premises it now has seven, which
  means that our induction proofs will have more cases.

  \item In general, it increases clarity if we get the same definition
  back which we put in.
  \end{enumerate}

  If we do not want the automatic splitting, we can switch it off by
  leaving out the \cmd{sequential} option. However, we will have to
  prove that our pattern matching is consistent\footnote{This prevents
  us from defining something like @{term "f x = True"} and @{term "f x
  = False"} simultaneously.}:
*}

function And2 :: "P3 \<Rightarrow> P3 \<Rightarrow> P3"
where
  "And2 T p = p"
| "And2 p T = p"
| "And2 p F = F"
| "And2 F p = F"
| "And2 X X = X"

txt {*
  \noindent Now let's look at the proof obligations generated by a
  function definition. In this case, they are:

  @{subgoals[display,indent=0]}\vspace{-1.2em}\hspace{3cm}\vdots\vspace{1.2em}

  The first subgoal expresses the completeness of the patterns. It has
  the form of an elimination rule and states that every @{term x} of
  the function's input type must match at least one of the patterns\footnote{Completeness could
  be equivalently stated as a disjunction of existential statements:
@{term "(\<exists>p. x = (T, p)) \<or> (\<exists>p. x = (p, T)) \<or> (\<exists>p. x = (p, F)) \<or>
  (\<exists>p. x = (F, p)) \<or> (x = (X, X))"}, and you can use the method @{text atomize_elim} to get that form instead.}. If the patterns just involve
  datatypes, we can solve it with the @{text "pat_completeness"}
  method:
*}

apply pat_completeness

txt {*
  The remaining subgoals express \emph{pattern compatibility}. We do
  allow that an input value matches multiple patterns, but in this
  case, the result (i.e.~the right hand sides of the equations) must
  also be equal. For each pair of two patterns, there is one such
  subgoal. Usually this needs injectivity of the constructors, which
  is used automatically by @{text "auto"}.
*}

by auto
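
text {*
  \noindent Since @{const And2} has no recursive calls, its
  termination proof is trivial; we can discharge it with the empty
  relation, just as for @{text ev} below. In exchange for the manual
  proof work, the resulting simplification rules are exactly the five
  equations we wrote down:
*}

termination And2 by (relation "{}") simp

thm And2.simps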


subsection {* Non-constructor patterns *}

text {*
  Most of Isabelle's basic types take the form of inductive datatypes,
  and usually pattern matching works on the constructors of such types.
  However, this need not always be the case, and the \cmd{function}
  command handles other kinds of patterns, too.

  One well-known instance of non-constructor patterns are
  so-called \emph{$n+k$-patterns}, which are a little controversial in
  the functional programming world. Here is the initial Fibonacci
  example with $n+k$-patterns:
*}

function fib2 :: "nat \<Rightarrow> nat"
where
  "fib2 0 = 1"
| "fib2 1 = 1"
| "fib2 (n + 2) = fib2 n + fib2 (Suc n)"

(*<*)ML_val "goals_limit := 1"(*>*)
txt {*
  This kind of matching is again justified by the proof of pattern
  completeness and compatibility.
  The proof obligation for pattern completeness states that every natural number is
  either @{term "0::nat"}, @{term "1::nat"} or @{term "n +
  (2::nat)"}:

  @{subgoals[display,indent=0]}

  This is an arithmetic triviality, but unfortunately the
  @{text arith} method cannot handle this specific form of an
  elimination rule. However, we can use the method @{text
  "atomize_elim"} to do an ad-hoc conversion to a disjunction of
  existentials, which can then be solved by the arithmetic decision procedure.
  Pattern compatibility and termination are automatic as usual.
*}
(*<*)ML_val "goals_limit := 10"(*>*)
apply atomize_elim
apply arith
apply auto
done
termination by lexicographic_order
text {*
  We can stretch the notion of pattern matching even more. The
  following function is not a sensible functional program, but a
  perfectly valid mathematical definition:
*}

function ev :: "nat \<Rightarrow> bool"
where
  "ev (2 * n) = True"
| "ev (2 * n + 1) = False"
apply atomize_elim
by arith+
termination by (relation "{}") simp

text {*
  This general notion of pattern matching gives you a certain freedom
  in writing down specifications. However, as always, such freedom should
  be used with care:

  If we leave the area of constructor
  patterns, we have effectively departed from the world of functional
  programming. This means that it is no longer possible to use the
  code generator and expect it to generate ML code for our
  definitions. Also, such a specification might not work very well together with
  simplification. Your mileage may vary.
*}


subsection {* Conditional equations *}

text {*
  The function package also supports conditional equations, which are
  similar to guards in a language like Haskell. Here is Euclid's
  algorithm written with conditional patterns\footnote{Note that the
  patterns are also overlapping in the base case.}:
*}

function gcd :: "nat \<Rightarrow> nat \<Rightarrow> nat"
where
  "gcd x 0 = x"
| "gcd 0 y = y"
| "x < y \<Longrightarrow> gcd (Suc x) (Suc y) = gcd (Suc x) (y - x)"
| "\<not> x < y \<Longrightarrow> gcd (Suc x) (Suc y) = gcd (x - y) (Suc y)"
by (atomize_elim, auto, arith)
termination by lexicographic_order

text {*
  By now, you can probably guess what the proof obligations for the
  pattern completeness and compatibility look like.

  Again, functions with conditional patterns are not supported by the
  code generator.
*}


subsection {* Pattern matching on strings *}

text {*
  As strings (i.e.\ lists of characters) are normal datatypes, pattern
  matching on them is possible, but somewhat problematic. Consider the
  following definition:

\end{isamarkuptext}
\noindent\cmd{fun} @{text "check :: \"string \<Rightarrow> bool\""}\\%
\cmd{where}\\%
\hspace*{2ex}@{text "\"check (''good'') = True\""}\\%
@{text "| \"check s = False\""}
\begin{isamarkuptext}

  \noindent An invocation of the above \cmd{fun} command does not
  terminate. What is the problem? Strings are lists of characters, and
  characters are a datatype with a lot of constructors. Splitting the
  catch-all pattern thus leads to an explosion of cases, which cannot
  be handled by Isabelle.

  There are two things we can do here. Either we write an explicit
  @{text "if"} on the right hand side, or we can use conditional patterns:
*}

function check :: "string \<Rightarrow> bool"
where
  "check (''good'') = True"
| "s \<noteq> ''good'' \<Longrightarrow> check s = False"
by auto
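
text {*
  \noindent The alternative with an explicit @{text "if"} (shown here
  under the fresh name @{text check2}) avoids the problem altogether,
  since the single catch-all pattern needs no splitting:
*}

fun check2 :: "string \<Rightarrow> bool"
where
  "check2 s = (if s = ''good'' then True else False)"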


section {* Partiality *}

text {*
  In HOL, all functions are total. A function @{term "f"} applied to
  @{term "x"} always has the value @{term "f x"}, and there is no notion
  of undefinedness.
  This is why we have to do termination
  proofs when defining functions: The proof justifies that the
  function can be defined by wellfounded recursion.

  However, the \cmd{function} package does support partiality to a
  certain extent. Let's look at the following function, which searches
  for a zero of a given function @{text f}:
*}

function (*<*)(domintros, tailrec)(*>*)findzero :: "(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat"
where
  "findzero f n = (if f n = 0 then n else findzero f (Suc n))"
by pat_completeness auto
(*<*)declare findzero.simps[simp del](*>*)

text {*
  \noindent Clearly, any attempt at a termination proof must fail. And without
  that, we do not get the usual rules @{text "findzero.simps"} and
  @{text "findzero.induct"}. So what was the definition good for at all?
*}

subsection {* Domain predicates *}

text {*
  The trick is that Isabelle has not only defined the function @{const findzero}, but also
  a predicate @{term "findzero_dom"} that characterizes the values where the function
  terminates: the \emph{domain} of the function. If we treat a
  partial function just as a total function with an additional domain
  predicate, we can derive simplification and
  induction rules as we do for total functions. They are guarded
  by domain conditions and are called @{text psimps} and @{text
  pinduct}:
*}

text {*
  \noindent\begin{minipage}{0.79\textwidth}@{thm[display,margin=85] findzero.psimps}\end{minipage}
  \hfill(@{text "findzero.psimps"})
  \vspace{1em}

  \noindent\begin{minipage}{0.79\textwidth}@{thm[display,margin=85] findzero.pinduct}\end{minipage}
  \hfill(@{text "findzero.pinduct"})
*}

text {*
  Remember that all we
  are doing here is to use some tricks to make a total function appear
  as if it were partial. We can still write the term @{term "findzero
  (\<lambda>x. 1) 0"} and like any other term of type @{typ nat} it is equal
  to some natural number, although we might not be able to find out
  which one. The function is \emph{underdefined}.

  But it is defined enough to prove something interesting about it. We
  can prove that if @{term "findzero f n"}
  terminates, it indeed returns a zero of @{term f}:
*}

lemma findzero_zero: "findzero_dom (f, n) \<Longrightarrow> f (findzero f n) = 0"

txt {* \noindent We apply induction as usual, but using the partial induction
  rule: *}

apply (induct f n rule: findzero.pinduct)

txt {* \noindent This gives the following subgoals:

  @{subgoals[display,indent=0]}

  \noindent The hypothesis in our lemma was used to satisfy the first premise in
  the induction rule. However, we also get @{term
  "findzero_dom (f, n)"} as a local assumption in the induction step. This
  allows us to unfold @{term "findzero f n"} using the @{text psimps}
  rule, and the rest is trivial. Since the @{text psimps} rules carry the
  @{text "[simp]"} attribute by default, we just need a single step:
*}
apply simp
done

text {*
  Proofs about partial functions are often not harder than for total
  functions. Fig.~\ref{findzero_isar} shows a slightly more
  complicated proof written in Isar. It is verbose enough to show how
  partiality comes into play: From the partial induction, we get an
  additional domain condition hypothesis. Observe how this condition
  is applied when calls to @{term findzero} are unfolded.
*}

text_raw {*
\begin{figure}
\hrule\vspace{6pt}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}
lemma "\<lbrakk>findzero_dom (f, n); x \<in> {n ..< findzero f n}\<rbrakk> \<Longrightarrow> f x \<noteq> 0"
proof (induct rule: findzero.pinduct)
  fix f n assume dom: "findzero_dom (f, n)"
               and IH: "\<lbrakk>f n \<noteq> 0; x \<in> {Suc n ..< findzero f (Suc n)}\<rbrakk> \<Longrightarrow> f x \<noteq> 0"
               and x_range: "x \<in> {n ..< findzero f n}"
  have "f n \<noteq> 0"
  proof
    assume "f n = 0"
    with dom have "findzero f n = n" by simp
    with x_range show False by auto
  qed

  from x_range have "x = n \<or> x \<in> {Suc n ..< findzero f n}" by auto
  thus "f x \<noteq> 0"
  proof
    assume "x = n"
    with `f n \<noteq> 0` show ?thesis by simp
  next
    assume "x \<in> {Suc n ..< findzero f n}"
    with dom and `f n \<noteq> 0` have "x \<in> {Suc n ..< findzero f (Suc n)}" by simp
    with IH and `f n \<noteq> 0`
    show ?thesis by simp
  qed
qed
text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}\vspace{6pt}\hrule
\caption{A proof about a partial function}\label{findzero_isar}
\end{figure}
*}

subsection {* Partial termination proofs *}

text {*
  Now that we have proved some interesting properties about our
  function, we should turn to the domain predicate and see if it is
  actually true for some values. Otherwise we would have just proved
  lemmas with @{term False} as a premise.

  Essentially, we need some introduction rules for @{text
  findzero_dom}. The function package can prove such domain
  introduction rules automatically. But since they are not used very
  often (they are almost never needed if the function is total), this
  functionality is disabled by default for efficiency reasons. So we have to go
  back and ask for them explicitly by passing the @{text
  "(domintros)"} option to the function package:

\vspace{1ex}
\noindent\cmd{function} @{text "(domintros) findzero :: \"(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat\""}\\%
\cmd{where}\isanewline%
\ \ \ldots\\

  \noindent Now the package has proved an introduction rule for @{text findzero_dom}:
*}

thm findzero.domintros

text {*
  @{thm[display] findzero.domintros}

  Domain introduction rules allow us to show that a given value lies in the
  domain of a function, if the arguments of all recursive calls
  are in the domain as well. They let us take a \qt{single step} in a
  termination proof. Usually, you want to combine them with a suitable
  induction principle.

  Since our function increases its argument at recursive calls, we
  need an induction principle which works \qt{backwards}. We will use
  @{text inc_induct}, which allows us to do induction from a fixed number
  \qt{downwards}:

  \begin{center}@{thm inc_induct}\hfill(@{text "inc_induct"})\end{center}

  Figure \ref{findzero_term} gives a detailed Isar proof of the fact
  that @{text findzero} terminates if there is a zero which is greater
  than or equal to @{term n}. First we derive two useful rules which will
  solve the base case and the step case of the induction. The
  induction is then straightforward, except for the unusual induction
  principle.

*}

text_raw {*
\begin{figure}
\hrule\vspace{6pt}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}
lemma findzero_termination:
  assumes "x \<ge> n" and "f x = 0"
  shows "findzero_dom (f, n)"
proof -
  have base: "findzero_dom (f, x)"
    by (rule findzero.domintros) (simp add: `f x = 0`)

  have step: "\<And>i. findzero_dom (f, Suc i)
    \<Longrightarrow> findzero_dom (f, i)"
    by (rule findzero.domintros) simp

  from `x \<ge> n` show ?thesis
  proof (induct rule: inc_induct)
    show "findzero_dom (f, x)" by (rule base)
  next
    fix i assume "findzero_dom (f, Suc i)"
    thus "findzero_dom (f, i)" by (rule step)
  qed
qed
text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}\vspace{6pt}\hrule
\caption{Termination proof for @{text findzero}}\label{findzero_term}
\end{figure}
*}

text {*
  Again, the proof given in Fig.~\ref{findzero_term} has a lot of
  detail in order to explain the principles. Using more automation, we
  can also have a short proof:
*}

lemma findzero_termination_short:
  assumes zero: "x \<ge> n"
  assumes [simp]: "f x = 0"
  shows "findzero_dom (f, n)"
using zero
by (induct rule: inc_induct) (auto intro: findzero.domintros)

text {*
  \noindent It is simple to combine the partial correctness result with the
  termination lemma:
*}

lemma findzero_total_correctness:
  "f x = 0 \<Longrightarrow> f (findzero f 0) = 0"
by (blast intro: findzero_zero findzero_termination)

subsection {* Definition of the domain predicate *}

text {*
  Sometimes it is useful to know what the definition of the domain
  predicate looks like. Actually, @{text findzero_dom} is just an
  abbreviation:

  @{abbrev[display] findzero_dom}

  The domain predicate is the \emph{accessible part} of a relation @{const
  findzero_rel}, which was also created internally by the function
  package. @{const findzero_rel} is just a normal
  inductive predicate, so we can inspect its definition by
  looking at the introduction rules @{text findzero_rel.intros}.
  In our case there is just a single rule:

  @{thm[display] findzero_rel.intros}

  The predicate @{const findzero_rel}
  describes the \emph{recursion relation} of the function
  definition. The recursion relation is a binary relation on
  the arguments of the function that relates each argument to its
  recursive calls. In general, there is one introduction rule for each
  recursive call.

  The predicate @{term "accp findzero_rel"} is the accessible part of
  that relation. An argument belongs to the accessible part, if it can
  be reached in a finite number of steps (cf.~its definition in @{text
  "Wellfounded.thy"}).

  Since the domain predicate is just an abbreviation, you can use
  lemmas for @{const accp} and @{const findzero_rel} directly. Some
  lemmas which are occasionally useful are @{text accpI}, @{text
  accp_downward}, and of course the introduction and elimination rules
  for the recursion relation @{text "findzero_rel.intros"} and @{text
  "findzero_rel.cases"}.
*}

(*lemma findzero_nicer_domintros:
  "f x = 0 \<Longrightarrow> findzero_dom (f, x)"
  "findzero_dom (f, Suc x) \<Longrightarrow> findzero_dom (f, x)"
by (rule accpI, erule findzero_rel.cases, auto)+
*)

subsection {* A Useful Special Case: Tail recursion *}

text {*
  The domain predicate is our trick that allows us to model partiality
  in a world of total functions. The downside of this is that we have
  to carry it around all the time. The termination proof above allowed
  us to replace the abstract @{term "findzero_dom (f, n)"} by the more
  concrete @{term "(x \<ge> n \<and> f x = (0::nat))"}, but the condition is still
  there and can only be discharged for special cases.
  In particular, the domain predicate guards the unfolding of our
  function, since it is there as a condition in the @{text psimp}
  rules.

  Now there is an important special case: We can actually get rid
  of the condition in the simplification rules, \emph{if the function
  is tail-recursive}. The reason is that for all tail-recursive
  equations there is a total function satisfying them, even if they
  are non-terminating.

  The function package internally does the right construction and can
  derive the unconditional simp rules if we ask it to do so. Luckily,
  our @{const "findzero"} function is tail-recursive, so we can just go
  back and add another option to the \cmd{function} command:

\vspace{1ex}
\noindent\cmd{function} @{text "(domintros, tailrec) findzero :: \"(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat\""}\\%
\cmd{where}\isanewline%
\ \ \ldots\\%


  \noindent Now, we actually get unconditional simplification rules, even
  though the function is partial:
*}

thm findzero.simps

text {*
  @{thm[display] findzero.simps}

  \noindent Of course these would make the simplifier loop, so we had better remove
  them from the simpset:
*}

declare findzero.simps[simp del]

text {*
  Getting rid of the domain conditions in the simplification rules is
  not only useful because it simplifies proofs. It is also required in
  order to use Isabelle's code generator to generate ML code
  from a function definition.
  Since the code generator only works with equations, it cannot be
  used with @{text "psimp"} rules. Thus, in order to generate code for
  partial functions, they must be defined as a tail recursion.
  Luckily, many functions have a relatively natural tail recursive
  definition.
*}

section {* Nested recursion *}

text {*
  Recursive calls which are nested in one another frequently cause
  complications, since their termination proof can depend on a partial
  correctness property of the function itself.

  As a small example, we define the \qt{nested zero} function:
*}

function nz :: "nat \<Rightarrow> nat"
where
  "nz 0 = 0"
| "nz (Suc n) = nz (nz n)"
by pat_completeness auto

text {*
  If we attempt to prove termination using the identity measure on
  naturals, this fails:
*}

termination
  apply (relation "measure (\<lambda>n. n)")
  apply auto

txt {*
  We get stuck with the subgoal

  @{subgoals[display]}

  Of course this statement is true, since we know that @{const nz} is
  the zero function. And in fact we have no problem proving this
  property by induction.
*}
(*<*)oops(*>*)
lemma nz_is_zero: "nz_dom n \<Longrightarrow> nz n = 0"
  by (induct rule: nz.pinduct) auto

text {*
  We formulate this as a partial correctness lemma with the condition
  @{term "nz_dom n"}. This allows us to prove it with the @{text
  pinduct} rule before we have proved termination. With this lemma,
  the termination proof works as expected:
*}

termination
  by (relation "measure (\<lambda>n. n)") (auto simp: nz_is_zero)

text {*
  As a general strategy, one should prove the statements needed for
  termination as a partial property first. Then they can be used to do
  the termination proof. This also works for less trivial
  examples. Figure \ref{f91} defines the 91-function, a well-known
  challenge problem due to John McCarthy, and proves its termination.
*}

text_raw {*
\begin{figure}
\hrule\vspace{6pt}
\begin{minipage}{0.8\textwidth}
\isabellestyle{it}
\isastyle\isamarkuptrue
*}

function f91 :: "nat \<Rightarrow> nat"
where
  "f91 n = (if 100 < n then n - 10 else f91 (f91 (n + 11)))"
by pat_completeness auto

lemma f91_estimate:
  assumes trm: "f91_dom n"
  shows "n < f91 n + 11"
using trm by induct auto

termination
proof
  let ?R = "measure (\<lambda>x. 101 - x)"
  show "wf ?R" ..

  fix n :: nat assume "\<not> 100 < n" -- "Assumptions for both calls"

  thus "(n + 11, n) \<in> ?R" by simp -- "Inner call"

  assume inner_trm: "f91_dom (n + 11)" -- "Outer call"
  with f91_estimate have "n + 11 < f91 (n + 11) + 11" .
  with `\<not> 100 < n` show "(f91 (n + 11), n) \<in> ?R" by simp
qed

text_raw {*
\isamarkupfalse\isabellestyle{tt}
\end{minipage}
\vspace{6pt}\hrule
\caption{McCarthy's 91-function}\label{f91}
\end{figure}
*}


section {* Higher-Order Recursion *}

text {*
  Higher-order recursion occurs when recursive calls
  are passed as arguments to higher-order combinators such as @{const
  map} or @{term filter}.
  As an example, imagine a datatype of n-ary trees:
*}

datatype 'a tree =
  Leaf 'a
| Branch "'a tree list"


text {* \noindent We can define a function which recursively reverses
  the order of the subtrees at every branch, using the
  list functions @{const rev} and @{const map}: *}

fun mirror :: "'a tree \<Rightarrow> 'a tree"
where
  "mirror (Leaf n) = Leaf n"
| "mirror (Branch l) = Branch (rev (map mirror l))"

text {*
  Although the definition is accepted without problems, let us look at the termination proof:
*}

termination proof
  txt {*

  As usual, we have to give a wellfounded relation, such that the
  arguments of the recursive calls get smaller. But what exactly are
  the arguments of the recursive calls when @{const mirror} is given as an
  argument to @{const map}? Isabelle gives us the
  subgoals

  @{subgoals[display,indent=0]}

  So the system seems to know that @{const map} only
  applies the recursive call @{term "mirror"} to elements
  of @{term "l"}, which is essential for the termination proof.

  This knowledge about @{const map} is encoded in so-called congruence rules,
  which are special theorems known to the \cmd{function} command. The
  rule for @{const map} is

  @{thm[display] map_cong}

  You can read this in the following way: Two applications of @{const
  map} are equal, if the list arguments are equal and the functions
  coincide on the elements of the list. This means that for the value
  @{term "map f l"} we only have to know how @{term f} behaves on
  the elements of @{term l}.

  Usually, one such congruence rule is
  needed for each higher-order construct that is used when defining
  new functions. In fact, even basic functions like @{const
  If} and @{const Let} are handled by this mechanism. The congruence
  rule for @{const If} states that the @{text then} branch is only
  relevant if the condition is true, and the @{text else} branch only if it
  is false:

  @{thm[display] if_cong}

  Congruence rules can be added to the
  function package by giving them the @{term fundef_cong} attribute.

  The constructs that are predefined in Isabelle usually
  come with the respective congruence rules.
  But if you define your own higher-order functions, you may have to
  state and prove the required congruence rules yourself, if you want to use your
  functions in recursive definitions. We sketch below how this is done
  for a hand-written variant of @{const map}.
*}
(*<*)oops(*>*)
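
text {*
  Here is such a sketch (the names @{text mapa}, @{text mapa_cong}
  and @{text mirror'} are ours): we define our own @{const map}-like
  combinator, prove a congruence rule of the same shape as
  @{text map_cong}, and declare it with the @{term fundef_cong}
  attribute:
*}

fun mapa :: "('a \<Rightarrow> 'b) \<Rightarrow> 'a list \<Rightarrow> 'b list"
where
  "mapa f [] = []"
| "mapa f (x # xs) = f x # mapa f xs"

lemma mapa_cong [fundef_cong]:
  "xs = ys \<Longrightarrow> (\<And>x. x \<in> set ys \<Longrightarrow> f x = g x) \<Longrightarrow> mapa f xs = mapa g ys"
by (induct ys arbitrary: xs) auto

text {*
  \noindent With this rule declared, @{text mapa} can be used in
  recursive definitions just like @{const map}:
*}

fun mirror' :: "'a tree \<Rightarrow> 'a tree"
where
  "mirror' (Leaf n) = Leaf n"
| "mirror' (Branch l) = Branch (rev (mapa mirror' l))"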

subsection {* Congruence Rules and Evaluation Order *}

text {*
  Higher-order logic differs from functional programming languages in
  that it has no built-in notion of evaluation order. A program is
  just a set of equations, and it is not specified how they must be
  evaluated.

  However, for the purpose of function definition, we must talk about
  evaluation order implicitly, when we reason about termination.
  Congruence rules express that a certain evaluation order is
  consistent with the logical definition.

  Consider the following function.
*}

function f :: "nat \<Rightarrow> bool"
where
  "f n = (n = 0 \<or> f (n - 1))"
(*<*)by pat_completeness auto(*>*)

text {*
  For this definition, the termination proof fails. The default configuration
  specifies no congruence rule for disjunction. We have to add a
  congruence rule that specifies left-to-right evaluation order:

  \vspace{1ex}
  \noindent @{thm disj_cong}\hfill(@{text "disj_cong"})
  \vspace{1ex}

  Now the definition works without problems. Note how the termination
  proof depends on the extra condition that we get from the congruence
  rule.

  However, as evaluation is not a hard-wired concept, we
  could just turn everything around by declaring a different
  congruence rule. Then we can make the reverse definition:
*}

lemma disj_cong2[fundef_cong]:
  "(\<not> Q' \<Longrightarrow> P = P') \<Longrightarrow> (Q = Q') \<Longrightarrow> (P \<or> Q) = (P' \<or> Q')"
  by blast

fun f' :: "nat \<Rightarrow> bool"
where
  "f' n = (f' (n - 1) \<or> n = 0)"

text {*
  \noindent These examples show that, in general, there is no \qt{best} set of
  congruence rules.

  However, such tweaking should rarely be necessary in
  practice, as most of the time, the default set of congruence rules
  works well.
*}

end