1.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1.2 +++ b/doc-src/Functions/Thy/Functions.thy Tue Mar 03 11:00:51 2009 +0100
1.3 @@ -0,0 +1,1264 @@
1.4 +(* Title: doc-src/IsarAdvanced/Functions/Thy/Fundefs.thy
1.5 + Author: Alexander Krauss, TU Muenchen
1.6 +
1.7 +Tutorial for function definitions with the new "function" package.
1.8 +*)
1.9 +
1.10 +theory Functions
1.11 +imports Main
1.12 +begin
1.13 +
1.14 +section {* Function Definitions for Dummies *}
1.15 +
1.16 +text {*
1.17 +  In most cases, defining a recursive function is just as straightforward as any other definition:
1.18 +*}
1.19 +
1.20 +fun fib :: "nat \<Rightarrow> nat"
1.21 +where
1.22 + "fib 0 = 1"
1.23 +| "fib (Suc 0) = 1"
1.24 +| "fib (Suc (Suc n)) = fib n + fib (Suc n)"
1.25 +
1.26 +text {*
1.27 + The syntax is rather self-explanatory: We introduce a function by
1.28 + giving its name, its type,
1.29 + and a set of defining recursive equations.
1.30 + If we leave out the type, the most general type will be
1.31 + inferred, which can sometimes lead to surprises: Since both @{term
1.32 + "1::nat"} and @{text "+"} are overloaded, we would end up
1.33 + with @{text "fib :: nat \<Rightarrow> 'a::{one,plus}"}.
1.34 +*}
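+
+text {*
+  As a quick sanity check (our own example, not part of the original
+  text), the defining equations can already be used to evaluate
+  @{const fib} on small arguments by simplification:
+*}
+
+lemma "fib (Suc (Suc (Suc (Suc 0)))) = 5" by simp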
1.35 +
1.36 +text {*
1.37 + The function always terminates, since its argument gets smaller in
1.38 + every recursive call.
1.39 + Since HOL is a logic of total functions, termination is a
1.40 + fundamental requirement to prevent inconsistencies\footnote{From the
1.41 + \qt{definition} @{text "f(n) = f(n) + 1"} we could prove
1.42 + @{text "0 = 1"} by subtracting @{text "f(n)"} on both sides.}.
1.43 + Isabelle tries to prove termination automatically when a definition
1.44 + is made. In \S\ref{termination}, we will look at cases where this
1.45 + fails and see what to do then.
1.46 +*}
1.47 +
1.48 +subsection {* Pattern matching *}
1.49 +
1.50 +text {* \label{patmatch}
1.51 +  As in functional programming, we can use pattern matching to
1.52 + define functions. At the moment we will only consider \emph{constructor
1.53 + patterns}, which only consist of datatype constructors and
1.54 + variables. Furthermore, patterns must be linear, i.e.\ all variables
1.55 + on the left hand side of an equation must be distinct. In
1.56 + \S\ref{genpats} we discuss more general pattern matching.
1.57 +
1.58 + If patterns overlap, the order of the equations is taken into
1.59 + account. The following function inserts a fixed element between any
1.60 + two elements of a list:
1.61 +*}
1.62 +
1.63 +fun sep :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list"
1.64 +where
1.65 + "sep a (x#y#xs) = x # a # sep a (y # xs)"
1.66 +| "sep a xs = xs"
1.67 +
1.68 +text {*
1.69 + Overlapping patterns are interpreted as \qt{increments} to what is
1.70 + already there: The second equation is only meant for the cases where
1.71 + the first one does not match. Consequently, Isabelle replaces it
1.72 + internally by the remaining cases, making the patterns disjoint:
1.73 +*}
1.74 +
1.75 +thm sep.simps
1.76 +
1.77 +text {* @{thm [display] sep.simps[no_vars]} *}
1.78 +
1.79 +text {*
1.80 + \noindent The equations from function definitions are automatically used in
1.81 + simplification:
1.82 +*}
1.83 +
1.84 +lemma "sep 0 [1, 2, 3] = [1, 0, 2, 0, 3]"
1.85 +by simp
1.86 +
1.87 +subsection {* Induction *}
1.88 +
1.89 +text {*
1.90 +
1.91 + Isabelle provides customized induction rules for recursive
1.92 + functions. These rules follow the recursive structure of the
1.93 + definition. Here is the rule @{text sep.induct} arising from the
1.94 + above definition of @{const sep}:
1.95 +
1.96 + @{thm [display] sep.induct}
1.97 +
1.98 +  We have a step case for lists with at least two elements, and two
1.99 + base cases for the zero- and the one-element list. Here is a simple
1.100 +  proof about @{const sep} and @{const map}:
1.101 +*}
1.102 +
1.103 +lemma "map f (sep x ys) = sep (f x) (map f ys)"
1.104 +apply (induct x ys rule: sep.induct)
1.105 +
1.106 +txt {*
1.107 +  We get three cases, as in the definition.
1.108 +
1.109 + @{subgoals [display]}
1.110 +*}
1.111 +
1.112 +apply auto
1.113 +done
1.114 +text {*
1.115 +
1.116 + With the \cmd{fun} command, you can define about 80\% of the
1.117 + functions that occur in practice. The rest of this tutorial explains
1.118 + the remaining 20\%.
1.119 +*}
1.120 +
1.121 +
1.122 +section {* fun vs.\ function *}
1.123 +
1.124 +text {*
1.125 + The \cmd{fun} command provides a
1.126 + convenient shorthand notation for simple function definitions. In
1.127 + this mode, Isabelle tries to solve all the necessary proof obligations
1.128 + automatically. If any proof fails, the definition is
1.129 + rejected. This can either mean that the definition is indeed faulty,
1.130 + or that the default proof procedures are just not smart enough (or
1.131 + rather: not designed) to handle the definition.
1.132 +
1.133 + By expanding the abbreviation to the more verbose \cmd{function} command, these proof obligations become visible and can be analyzed or
1.134 + solved manually. The expansion from \cmd{fun} to \cmd{function} is as follows:
1.135 +
1.136 +\end{isamarkuptext}
1.137 +
1.138 +
1.139 +\[\left[\;\begin{minipage}{0.25\textwidth}\vspace{6pt}
1.140 +\cmd{fun} @{text "f :: \<tau>"}\\%
1.141 +\cmd{where}\\%
1.142 +\hspace*{2ex}{\it equations}\\%
1.143 +\hspace*{2ex}\vdots\vspace*{6pt}
1.144 +\end{minipage}\right]
1.145 +\quad\equiv\quad
1.146 +\left[\;\begin{minipage}{0.48\textwidth}\vspace{6pt}
1.147 +\cmd{function} @{text "("}\cmd{sequential}@{text ") f :: \<tau>"}\\%
1.148 +\cmd{where}\\%
1.149 +\hspace*{2ex}{\it equations}\\%
1.150 +\hspace*{2ex}\vdots\\%
1.151 +\cmd{by} @{text "pat_completeness auto"}\\%
1.152 +\cmd{termination by} @{text "lexicographic_order"}\vspace{6pt}
1.153 +\end{minipage}
1.154 +\right]\]
1.155 +
1.156 +\begin{isamarkuptext}
1.157 + \vspace*{1em}
1.158 + \noindent Some details have now become explicit:
1.159 +
1.160 + \begin{enumerate}
1.161 + \item The \cmd{sequential} option enables the preprocessing of
1.162 + pattern overlaps which we already saw. Without this option, the equations
1.163 + must already be disjoint and complete. The automatic completion only
1.164 + works with constructor patterns.
1.165 +
1.166 + \item A function definition produces a proof obligation which
1.167 + expresses completeness and compatibility of patterns (we talk about
1.168 + this later). The combination of the methods @{text "pat_completeness"} and
1.169 + @{text "auto"} is used to solve this proof obligation.
1.170 +
1.171 + \item A termination proof follows the definition, started by the
1.172 + \cmd{termination} command. This will be explained in \S\ref{termination}.
1.173 + \end{enumerate}
1.174 + Whenever a \cmd{fun} command fails, it is usually a good idea to
1.175 + expand the syntax to the more verbose \cmd{function} form, to see
1.176 + what is actually going on.
1.177 + *}
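+
+text {*
+  For illustration (our own expansion following the scheme above, with
+  a fresh name to avoid a clash), the @{const fib} definition from the
+  introduction can be spelled out verbosely like this:
+*}
+
+function (sequential) fib' :: "nat \<Rightarrow> nat"
+where
+  "fib' 0 = 1"
+| "fib' (Suc 0) = 1"
+| "fib' (Suc (Suc n)) = fib' n + fib' (Suc n)"
+by pat_completeness auto
+termination by lexicographic_order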
1.178 +
1.179 +
1.180 +section {* Termination *}
1.181 +
1.182 +text {*\label{termination}
1.183 + The method @{text "lexicographic_order"} is the default method for
1.184 + termination proofs. It can prove termination of a
1.185 + certain class of functions by searching for a suitable lexicographic
1.186 + combination of size measures. Of course, not all functions have such
1.187 + a simple termination argument. For them, we can specify the termination
1.188 + relation manually.
1.189 +*}
1.190 +
1.191 +subsection {* The {\tt relation} method *}
1.192 +text{*
1.193 + Consider the following function, which sums up natural numbers up to
1.194 + @{text "N"}, using a counter @{text "i"}:
1.195 +*}
1.196 +
1.197 +function sum :: "nat \<Rightarrow> nat \<Rightarrow> nat"
1.198 +where
1.199 + "sum i N = (if i > N then 0 else i + sum (Suc i) N)"
1.200 +by pat_completeness auto
1.201 +
1.202 +text {*
1.203 +  \noindent The @{text "lexicographic_order"} method fails on this example, because none of the
1.204 +  arguments decreases in the recursive call with respect to the standard size ordering.
1.205 + To prove termination manually, we must provide a custom wellfounded relation.
1.206 +
1.207 + The termination argument for @{text "sum"} is based on the fact that
1.208 + the \emph{difference} between @{text "i"} and @{text "N"} gets
1.209 + smaller in every step, and that the recursion stops when @{text "i"}
1.210 + is greater than @{text "N"}. Phrased differently, the expression
1.211 + @{text "N + 1 - i"} always decreases.
1.212 +
1.213 + We can use this expression as a measure function suitable to prove termination.
1.214 +*}
1.215 +
1.216 +termination sum
1.217 +apply (relation "measure (\<lambda>(i,N). N + 1 - i)")
1.218 +
1.219 +txt {*
1.220 + The \cmd{termination} command sets up the termination goal for the
1.221 + specified function @{text "sum"}. If the function name is omitted, it
1.222 + implicitly refers to the last function definition.
1.223 +
1.224 + The @{text relation} method takes a relation of
1.225 + type @{typ "('a \<times> 'a) set"}, where @{typ "'a"} is the argument type of
1.226 + the function. If the function has multiple curried arguments, then
1.227 +  these are packed together into a tuple, as in the
1.228 +  example above.
1.229 +
1.230 + The predefined function @{term[source] "measure :: ('a \<Rightarrow> nat) \<Rightarrow> ('a \<times> 'a) set"} constructs a
1.231 + wellfounded relation from a mapping into the natural numbers (a
1.232 + \emph{measure function}).
1.233 +
1.234 + After the invocation of @{text "relation"}, we must prove that (a)
1.235 + the relation we supplied is wellfounded, and (b) that the arguments
1.236 + of recursive calls indeed decrease with respect to the
1.237 + relation:
1.238 +
1.239 + @{subgoals[display,indent=0]}
1.240 +
1.241 + These goals are all solved by @{text "auto"}:
1.242 +*}
1.243 +
1.244 +apply auto
1.245 +done
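+
+text {*
+  As a quick check (our own example), the equation for @{const sum} is
+  now available for simplification; in the base case only the
+  condition of the @{text "if"} has to be evaluated:
+*}
+
+lemma "sum 2 1 = 0" by simp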
1.246 +
1.247 +text {*
1.248 + Let us complicate the function a little, by adding some more
1.249 + recursive calls:
1.250 +*}
1.251 +
1.252 +function foo :: "nat \<Rightarrow> nat \<Rightarrow> nat"
1.253 +where
1.254 + "foo i N = (if i > N
1.255 + then (if N = 0 then 0 else foo 0 (N - 1))
1.256 + else i + foo (Suc i) N)"
1.257 +by pat_completeness auto
1.258 +
1.259 +text {*
1.260 + When @{text "i"} has reached @{text "N"}, it starts at zero again
1.261 + and @{text "N"} is decremented.
1.262 + This corresponds to a nested
1.263 + loop where one index counts up and the other down. Termination can
1.264 + be proved using a lexicographic combination of two measures, namely
1.265 + the value of @{text "N"} and the above difference. The @{const
1.266 + "measures"} combinator generalizes @{text "measure"} by taking a
1.267 + list of measure functions.
1.268 +*}
1.269 +
1.270 +termination
1.271 +by (relation "measures [\<lambda>(i, N). N, \<lambda>(i,N). N + 1 - i]") auto
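+
+text {*
+  A quick evaluation (our own example) confirms the base case of the
+  nested loop:
+*}
+
+lemma "foo 1 0 = 0" by simp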
1.272 +
1.273 +subsection {* How @{text "lexicographic_order"} works *}
1.274 +
1.275 +(*fun fails :: "nat \<Rightarrow> nat list \<Rightarrow> nat"
1.276 +where
1.277 + "fails a [] = a"
1.278 +| "fails a (x#xs) = fails (x + a) (x # xs)"
1.279 +*)
1.280 +
1.281 +text {*
1.282 + To see how the automatic termination proofs work, let's look at an
1.283 + example where it fails\footnote{For a detailed discussion of the
1.284 +  termination prover, see \cite{bulwahnKN07}.}:
1.285 +
1.286 +\end{isamarkuptext}
1.287 +\cmd{fun} @{text "fails :: \"nat \<Rightarrow> nat list \<Rightarrow> nat\""}\\%
1.288 +\cmd{where}\\%
1.289 +\hspace*{2ex}@{text "\"fails a [] = a\""}\\%
1.290 +|\hspace*{1.5ex}@{text "\"fails a (x#xs) = fails (x + a) (x#xs)\""}\\
1.291 +\begin{isamarkuptext}
1.292 +
1.293 +\noindent Isabelle responds with the following error:
1.294 +
1.295 +\begin{isabelle}
1.296 +*** Unfinished subgoals:\newline
1.297 +*** (a, 1, <):\newline
1.298 +*** \ 1.~@{text "\<And>x. x = 0"}\newline
1.299 +*** (a, 1, <=):\newline
1.300 +*** \ 1.~False\newline
1.301 +*** (a, 2, <):\newline
1.302 +*** \ 1.~False\newline
1.303 +*** Calls:\newline
1.304 +*** a) @{text "(a, x # xs) -->> (x + a, x # xs)"}\newline
1.305 +*** Measures:\newline
1.306 +*** 1) @{text "\<lambda>x. size (fst x)"}\newline
1.307 +*** 2) @{text "\<lambda>x. size (snd x)"}\newline
1.308 +*** Result matrix:\newline
1.309 +*** \ \ \ \ 1\ \ 2 \newline
1.310 +*** a: ? <= \newline
1.311 +*** Could not find lexicographic termination order.\newline
1.312 +*** At command "fun".\newline
1.313 +\end{isabelle}
1.314 +*}
1.315 +
1.316 +
1.317 +text {*
1.318 + The key to this error message is the matrix at the bottom. The rows
1.319 +  of that matrix correspond to the different recursive calls (in our
1.320 + case, there is just one). The columns are the function's arguments
1.321 + (expressed through different measure functions, which map the
1.322 + argument tuple to a natural number).
1.323 +
1.324 + The contents of the matrix summarize what is known about argument
1.325 + descents: The second argument has a weak descent (@{text "<="}) at the
1.326 + recursive call, and for the first argument nothing could be proved,
1.327 + which is expressed by @{text "?"}. In general, there are the values
1.328 + @{text "<"}, @{text "<="} and @{text "?"}.
1.329 +
1.330 + For the failed proof attempts, the unfinished subgoals are also
1.331 + printed. Looking at these will often point to a missing lemma.
1.332 +
1.333 +% As a more real example, here is quicksort:
1.334 +*}
1.335 +(*
1.336 +function qs :: "nat list \<Rightarrow> nat list"
1.337 +where
1.338 + "qs [] = []"
1.339 +| "qs (x#xs) = qs [y\<in>xs. y < x] @ x # qs [y\<in>xs. y \<ge> x]"
1.340 +by pat_completeness auto
1.341 +
1.342 +termination apply lexicographic_order
1.343 +
1.344 +text {* If we try @{text "lexicographic_order"} method, we get the
1.345 + following error *}
1.346 +termination by (lexicographic_order simp:l2)
1.347 +
1.348 +lemma l: "x \<le> y \<Longrightarrow> x < Suc y" by arith
1.349 +
1.350 +function
1.351 +
1.352 +*)
1.353 +
1.354 +section {* Mutual Recursion *}
1.355 +
1.356 +text {*
1.357 + If two or more functions call one another mutually, they have to be defined
1.358 + in one step. Here are @{text "even"} and @{text "odd"}:
1.359 +*}
1.360 +
1.361 +function even :: "nat \<Rightarrow> bool"
1.362 + and odd :: "nat \<Rightarrow> bool"
1.363 +where
1.364 + "even 0 = True"
1.365 +| "odd 0 = False"
1.366 +| "even (Suc n) = odd n"
1.367 +| "odd (Suc n) = even n"
1.368 +by pat_completeness auto
1.369 +
1.370 +text {*
1.371 + To eliminate the mutual dependencies, Isabelle internally
1.372 + creates a single function operating on the sum
1.373 + type @{typ "nat + nat"}. Then, @{const even} and @{const odd} are
1.374 + defined as projections. Consequently, termination has to be proved
1.375 + simultaneously for both functions, by specifying a measure on the
1.376 + sum type:
1.377 +*}
1.378 +
1.379 +termination
1.380 +by (relation "measure (\<lambda>x. case x of Inl n \<Rightarrow> n | Inr n \<Rightarrow> n)") auto
1.381 +
1.382 +text {*
1.383 + We could also have used @{text lexicographic_order}, which
1.384 +  supports termination proofs for mutually recursive functions to a certain extent.
1.385 +*}
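+
+text {*
+  As a small sanity check (our own example), the two functions can now
+  be evaluated by mutually unfolding their equations:
+*}
+
+lemma "even (Suc (Suc (Suc (Suc 0))))" by simp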
1.386 +
1.387 +subsection {* Induction for mutual recursion *}
1.388 +
1.389 +text {*
1.390 +
1.391 + When functions are mutually recursive, proving properties about them
1.392 + generally requires simultaneous induction. The induction rule @{text "even_odd.induct"}
1.393 + generated from the above definition reflects this.
1.394 +
1.395 + Let us prove something about @{const even} and @{const odd}:
1.396 +*}
1.397 +
1.398 +lemma even_odd_mod2:
1.399 + "even n = (n mod 2 = 0)"
1.400 + "odd n = (n mod 2 = 1)"
1.401 +
1.402 +txt {*
1.403 + We apply simultaneous induction, specifying the induction variable
1.404 + for both goals, separated by \cmd{and}: *}
1.405 +
1.406 +apply (induct n and n rule: even_odd.induct)
1.407 +
1.408 +txt {*
1.409 + We get four subgoals, which correspond to the clauses in the
1.410 + definition of @{const even} and @{const odd}:
1.411 + @{subgoals[display,indent=0]}
1.412 + Simplification solves the first two goals, leaving us with two
1.413 + statements about the @{text "mod"} operation to prove:
1.414 +*}
1.415 +
1.416 +apply simp_all
1.417 +
1.418 +txt {*
1.419 + @{subgoals[display,indent=0]}
1.420 +
1.421 + \noindent These can be handled by Isabelle's arithmetic decision procedures.
1.422 +
1.423 +*}
1.424 +
1.425 +apply arith
1.426 +apply arith
1.427 +done
1.428 +
1.429 +text {*
1.430 + In proofs like this, the simultaneous induction is really essential:
1.431 + Even if we are just interested in one of the results, the other
1.432 + one is necessary to strengthen the induction hypothesis. If we leave
1.433 + out the statement about @{const odd} and just write @{term True} instead,
1.434 + the same proof fails:
1.435 +*}
1.436 +
1.437 +lemma failed_attempt:
1.438 + "even n = (n mod 2 = 0)"
1.439 + "True"
1.440 +apply (induct n rule: even_odd.induct)
1.441 +
1.442 +txt {*
1.443 + \noindent Now the third subgoal is a dead end, since we have no
1.444 + useful induction hypothesis available:
1.445 +
1.446 + @{subgoals[display,indent=0]}
1.447 +*}
1.448 +
1.449 +oops
1.450 +
1.451 +section {* General pattern matching *}
1.452 +text{*\label{genpats} *}
1.453 +
1.454 +subsection {* Avoiding automatic pattern splitting *}
1.455 +
1.456 +text {*
1.457 +
1.458 +  Up to now, we have used pattern matching only on datatypes, and the
1.459 +  patterns were always disjoint and complete; if they were not,
1.460 +  they were made disjoint automatically, as in the definition of
1.461 +  @{const "sep"} in \S\ref{patmatch}.
1.462 +
1.463 + This automatic splitting can significantly increase the number of
1.464 + equations involved, and this is not always desirable. The following
1.465 + example shows the problem:
1.466 +
1.467 + Suppose we are modeling incomplete knowledge about the world by a
1.468 + three-valued datatype, which has values @{term "T"}, @{term "F"}
1.469 + and @{term "X"} for true, false and uncertain propositions, respectively.
1.470 +*}
1.471 +
1.472 +datatype P3 = T | F | X
1.473 +
1.474 +text {* \noindent Then the conjunction of such values can be defined as follows: *}
1.475 +
1.476 +fun And :: "P3 \<Rightarrow> P3 \<Rightarrow> P3"
1.477 +where
1.478 + "And T p = p"
1.479 +| "And p T = p"
1.480 +| "And p F = F"
1.481 +| "And F p = F"
1.482 +| "And X X = X"
1.483 +
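+text {*
+  As a quick check (our own example, not part of the original text), a
+  ground instance evaluates directly with the generated equations:
+*}
+
+lemma "And X T = X" by simp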
1.484 +
1.485 +text {*
1.486 + This definition is useful, because the equations can directly be used
1.487 + as simplification rules. But the patterns overlap: For example,
1.488 + the expression @{term "And T T"} is matched by both the first and
1.489 + the second equation. By default, Isabelle makes the patterns disjoint by
1.490 + splitting them up, producing instances:
1.491 +*}
1.492 +
1.493 +thm And.simps
1.494 +
1.495 +text {*
1.496 + @{thm[indent=4] And.simps}
1.497 +
1.498 + \vspace*{1em}
1.499 + \noindent There are several problems with this:
1.500 +
1.501 + \begin{enumerate}
1.502 + \item If the datatype has many constructors, there can be an
1.503 + explosion of equations. For @{const "And"}, we get seven instead of
1.504 + five equations, which can be tolerated, but this is just a small
1.505 + example.
1.506 +
1.507 + \item Since splitting makes the equations \qt{less general}, they
1.508 + do not always match in rewriting. While the term @{term "And x F"}
1.509 + can be simplified to @{term "F"} with the original equations, a
1.510 + (manual) case split on @{term "x"} is now necessary.
1.511 +
1.512 + \item The splitting also concerns the induction rule @{text
1.513 + "And.induct"}. Instead of five premises it now has seven, which
1.514 + means that our induction proofs will have more cases.
1.515 +
1.516 + \item In general, it increases clarity if we get the same definition
1.517 + back which we put in.
1.518 + \end{enumerate}
1.519 +
1.520 + If we do not want the automatic splitting, we can switch it off by
1.521 + leaving out the \cmd{sequential} option. However, we will have to
1.522 + prove that our pattern matching is consistent\footnote{This prevents
1.523 + us from defining something like @{term "f x = True"} and @{term "f x
1.524 + = False"} simultaneously.}:
1.525 +*}
1.526 +
1.527 +function And2 :: "P3 \<Rightarrow> P3 \<Rightarrow> P3"
1.528 +where
1.529 + "And2 T p = p"
1.530 +| "And2 p T = p"
1.531 +| "And2 p F = F"
1.532 +| "And2 F p = F"
1.533 +| "And2 X X = X"
1.534 +
1.535 +txt {*
1.536 + \noindent Now let's look at the proof obligations generated by a
1.537 + function definition. In this case, they are:
1.538 +
1.539 + @{subgoals[display,indent=0]}\vspace{-1.2em}\hspace{3cm}\vdots\vspace{1.2em}
1.540 +
1.541 + The first subgoal expresses the completeness of the patterns. It has
1.542 + the form of an elimination rule and states that every @{term x} of
1.543 + the function's input type must match at least one of the patterns\footnote{Completeness could
1.544 + be equivalently stated as a disjunction of existential statements:
1.545 +@{term "(\<exists>p. x = (T, p)) \<or> (\<exists>p. x = (p, T)) \<or> (\<exists>p. x = (p, F)) \<or>
1.546 + (\<exists>p. x = (F, p)) \<or> (x = (X, X))"}, and you can use the method @{text atomize_elim} to get that form instead.}. If the patterns just involve
1.547 + datatypes, we can solve it with the @{text "pat_completeness"}
1.548 + method:
1.549 +*}
1.550 +
1.551 +apply pat_completeness
1.552 +
1.553 +txt {*
1.554 +  The remaining subgoals express \emph{pattern compatibility}. We do
1.555 +  allow an input value to match multiple patterns, but in this
1.556 +  case, the result (i.e.~the right hand sides of the equations) must
1.557 +  also be equal. For each pair of patterns, there is one such
1.558 +  subgoal. Usually this needs injectivity of the constructors, which
1.559 + is used automatically by @{text "auto"}.
1.560 +*}
1.561 +
1.562 +by auto
1.563 +
1.564 +
1.565 +subsection {* Non-constructor patterns *}
1.566 +
1.567 +text {*
1.568 + Most of Isabelle's basic types take the form of inductive datatypes,
1.569 + and usually pattern matching works on the constructors of such types.
1.570 + However, this need not be always the case, and the \cmd{function}
1.571 +  command handles other kinds of patterns, too.
1.572 +
1.573 + One well-known instance of non-constructor patterns are
1.574 + so-called \emph{$n+k$-patterns}, which are a little controversial in
1.575 +  the functional programming world. Here is the initial Fibonacci
1.576 + example with $n+k$-patterns:
1.577 +*}
1.578 +
1.579 +function fib2 :: "nat \<Rightarrow> nat"
1.580 +where
1.581 + "fib2 0 = 1"
1.582 +| "fib2 1 = 1"
1.583 +| "fib2 (n + 2) = fib2 n + fib2 (Suc n)"
1.584 +
1.585 +(*<*)ML_val "goals_limit := 1"(*>*)
1.586 +txt {*
1.587 + This kind of matching is again justified by the proof of pattern
1.588 + completeness and compatibility.
1.589 + The proof obligation for pattern completeness states that every natural number is
1.590 + either @{term "0::nat"}, @{term "1::nat"} or @{term "n +
1.591 + (2::nat)"}:
1.592 +
1.593 + @{subgoals[display,indent=0]}
1.594 +
1.595 + This is an arithmetic triviality, but unfortunately the
1.596 + @{text arith} method cannot handle this specific form of an
1.597 + elimination rule. However, we can use the method @{text
1.598 + "atomize_elim"} to do an ad-hoc conversion to a disjunction of
1.599 + existentials, which can then be solved by the arithmetic decision procedure.
1.600 + Pattern compatibility and termination are automatic as usual.
1.601 +*}
1.602 +(*<*)ML_val "goals_limit := 10"(*>*)
1.603 +apply atomize_elim
1.604 +apply arith
1.605 +apply auto
1.606 +done
1.607 +termination by lexicographic_order
1.608 +text {*
1.609 + We can stretch the notion of pattern matching even more. The
1.610 + following function is not a sensible functional program, but a
1.611 + perfectly valid mathematical definition:
1.612 +*}
1.613 +
1.614 +function ev :: "nat \<Rightarrow> bool"
1.615 +where
1.616 + "ev (2 * n) = True"
1.617 +| "ev (2 * n + 1) = False"
1.618 +apply atomize_elim
1.619 +by arith+
1.620 +termination by (relation "{}") simp
1.621 +
1.622 +text {*
1.623 + This general notion of pattern matching gives you a certain freedom
1.624 + in writing down specifications. However, as always, such freedom should
1.625 + be used with care:
1.626 +
1.627 + If we leave the area of constructor
1.628 + patterns, we have effectively departed from the world of functional
1.629 + programming. This means that it is no longer possible to use the
1.630 + code generator, and expect it to generate ML code for our
1.631 + definitions. Also, such a specification might not work very well together with
1.632 + simplification. Your mileage may vary.
1.633 +*}
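+
+text {*
+  For instance (our own example), a term that literally matches one of
+  the patterns still rewrites fine, whereas a numeral like @{term "ev 4"}
+  is not obviously an instance of either pattern:
+*}
+
+lemma "ev (2 * n + 1) = False" by simp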
1.634 +
1.635 +
1.636 +subsection {* Conditional equations *}
1.637 +
1.638 +text {*
1.639 + The function package also supports conditional equations, which are
1.640 + similar to guards in a language like Haskell. Here is Euclid's
1.641 + algorithm written with conditional patterns\footnote{Note that the
1.642 +  patterns are also overlapping in the base case.}:
1.643 +*}
1.644 +
1.645 +function gcd :: "nat \<Rightarrow> nat \<Rightarrow> nat"
1.646 +where
1.647 + "gcd x 0 = x"
1.648 +| "gcd 0 y = y"
1.649 +| "x < y \<Longrightarrow> gcd (Suc x) (Suc y) = gcd (Suc x) (y - x)"
1.650 +| "\<not> x < y \<Longrightarrow> gcd (Suc x) (Suc y) = gcd (x - y) (Suc y)"
1.651 +by (atomize_elim, auto, arith)
1.652 +termination by lexicographic_order
1.653 +
1.654 +text {*
1.655 + By now, you can probably guess what the proof obligations for the
1.656 + pattern completeness and compatibility look like.
1.657 +
1.658 + Again, functions with conditional patterns are not supported by the
1.659 + code generator.
1.660 +*}
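+
+text {*
+  Nevertheless (our own check), the conditional equations work fine for
+  rewriting; the simplifier discharges the conditions as it goes:
+*}
+
+lemma "gcd (Suc 0) (Suc (Suc 0)) = Suc 0" by simp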
1.661 +
1.662 +
1.663 +subsection {* Pattern matching on strings *}
1.664 +
1.665 +text {*
1.666 +  As strings (i.e.\ lists of characters) are normal datatypes, pattern
1.667 + matching on them is possible, but somewhat problematic. Consider the
1.668 + following definition:
1.669 +
1.670 +\end{isamarkuptext}
1.671 +\noindent\cmd{fun} @{text "check :: \"string \<Rightarrow> bool\""}\\%
1.672 +\cmd{where}\\%
1.673 +\hspace*{2ex}@{text "\"check (''good'') = True\""}\\%
1.674 +@{text "| \"check s = False\""}
1.675 +\begin{isamarkuptext}
1.676 +
1.677 + \noindent An invocation of the above \cmd{fun} command does not
1.678 + terminate. What is the problem? Strings are lists of characters, and
1.679 + characters are a datatype with a lot of constructors. Splitting the
1.680 + catch-all pattern thus leads to an explosion of cases, which cannot
1.681 + be handled by Isabelle.
1.682 +
1.683 + There are two things we can do here. Either we write an explicit
1.684 + @{text "if"} on the right hand side, or we can use conditional patterns:
1.685 +*}
1.686 +
1.687 +function check :: "string \<Rightarrow> bool"
1.688 +where
1.689 + "check (''good'') = True"
1.690 +| "s \<noteq> ''good'' \<Longrightarrow> check s = False"
1.691 +by auto
1.692 +
1.693 +
1.694 +section {* Partiality *}
1.695 +
1.696 +text {*
1.697 + In HOL, all functions are total. A function @{term "f"} applied to
1.698 + @{term "x"} always has the value @{term "f x"}, and there is no notion
1.699 + of undefinedness.
1.700 + This is why we have to do termination
1.701 + proofs when defining functions: The proof justifies that the
1.702 + function can be defined by wellfounded recursion.
1.703 +
1.704 + However, the \cmd{function} package does support partiality to a
1.705 +  certain extent. Let's look at the following function, which searches
1.706 +  for a zero of a given function @{term "f"}.
1.707 +*}
1.708 +
1.709 +function (*<*)(domintros, tailrec)(*>*)findzero :: "(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat"
1.710 +where
1.711 + "findzero f n = (if f n = 0 then n else findzero f (Suc n))"
1.712 +by pat_completeness auto
1.713 +(*<*)declare findzero.simps[simp del](*>*)
1.714 +
1.715 +text {*
1.716 +  \noindent Clearly, any attempt at a termination proof must fail. And without
1.717 + that, we do not get the usual rules @{text "findzero.simps"} and
1.718 + @{text "findzero.induct"}. So what was the definition good for at all?
1.719 +*}
1.720 +
1.721 +subsection {* Domain predicates *}
1.722 +
1.723 +text {*
1.724 + The trick is that Isabelle has not only defined the function @{const findzero}, but also
1.725 + a predicate @{term "findzero_dom"} that characterizes the values where the function
1.726 + terminates: the \emph{domain} of the function. If we treat a
1.727 + partial function just as a total function with an additional domain
1.728 + predicate, we can derive simplification and
1.729 + induction rules as we do for total functions. They are guarded
1.730 + by domain conditions and are called @{text psimps} and @{text
1.731 + pinduct}:
1.732 +*}
1.733 +
1.734 +text {*
1.735 + \noindent\begin{minipage}{0.79\textwidth}@{thm[display,margin=85] findzero.psimps}\end{minipage}
1.736 + \hfill(@{text "findzero.psimps"})
1.737 + \vspace{1em}
1.738 +
1.739 + \noindent\begin{minipage}{0.79\textwidth}@{thm[display,margin=85] findzero.pinduct}\end{minipage}
1.740 + \hfill(@{text "findzero.pinduct"})
1.741 +*}
1.742 +
1.743 +text {*
1.744 + Remember that all we
1.745 +  are doing here is using some tricks to make a total function appear
1.746 +  as if it were partial. We can still write the term @{term "findzero
1.747 + (\<lambda>x. 1) 0"} and like any other term of type @{typ nat} it is equal
1.748 + to some natural number, although we might not be able to find out
1.749 + which one. The function is \emph{underdefined}.
1.750 +
1.751 + But it is defined enough to prove something interesting about it. We
1.752 + can prove that if @{term "findzero f n"}
1.753 + terminates, it indeed returns a zero of @{term f}:
1.754 +*}
1.755 +
1.756 +lemma findzero_zero: "findzero_dom (f, n) \<Longrightarrow> f (findzero f n) = 0"
1.757 +
1.758 +txt {* \noindent We apply induction as usual, but using the partial induction
1.759 + rule: *}
1.760 +
1.761 +apply (induct f n rule: findzero.pinduct)
1.762 +
1.763 +txt {* \noindent This gives the following subgoals:
1.764 +
1.765 + @{subgoals[display,indent=0]}
1.766 +
1.767 + \noindent The hypothesis in our lemma was used to satisfy the first premise in
1.768 + the induction rule. However, we also get @{term
1.769 + "findzero_dom (f, n)"} as a local assumption in the induction step. This
1.770 +  allows us to unfold @{term "findzero f n"} using the @{text psimps}
1.771 + rule, and the rest is trivial. Since the @{text psimps} rules carry the
1.772 + @{text "[simp]"} attribute by default, we just need a single step:
1.773 + *}
1.774 +apply simp
1.775 +done
1.776 +
1.777 +text {*
1.778 + Proofs about partial functions are often not harder than for total
1.779 + functions. Fig.~\ref{findzero_isar} shows a slightly more
1.780 + complicated proof written in Isar. It is verbose enough to show how
1.781 + partiality comes into play: From the partial induction, we get an
1.782 + additional domain condition hypothesis. Observe how this condition
1.783 + is applied when calls to @{term findzero} are unfolded.
1.784 +*}
1.785 +
1.786 +text_raw {*
1.787 +\begin{figure}
1.788 +\hrule\vspace{6pt}
1.789 +\begin{minipage}{0.8\textwidth}
1.790 +\isabellestyle{it}
1.791 +\isastyle\isamarkuptrue
1.792 +*}
1.793 +lemma "\<lbrakk>findzero_dom (f, n); x \<in> {n ..< findzero f n}\<rbrakk> \<Longrightarrow> f x \<noteq> 0"
1.794 +proof (induct rule: findzero.pinduct)
1.795 + fix f n assume dom: "findzero_dom (f, n)"
1.796 + and IH: "\<lbrakk>f n \<noteq> 0; x \<in> {Suc n ..< findzero f (Suc n)}\<rbrakk> \<Longrightarrow> f x \<noteq> 0"
1.797 + and x_range: "x \<in> {n ..< findzero f n}"
1.798 + have "f n \<noteq> 0"
1.799 + proof
1.800 + assume "f n = 0"
1.801 + with dom have "findzero f n = n" by simp
1.802 + with x_range show False by auto
1.803 + qed
1.804 +
1.805 + from x_range have "x = n \<or> x \<in> {Suc n ..< findzero f n}" by auto
1.806 + thus "f x \<noteq> 0"
1.807 + proof
1.808 + assume "x = n"
1.809 + with `f n \<noteq> 0` show ?thesis by simp
1.810 + next
1.811 + assume "x \<in> {Suc n ..< findzero f n}"
1.812 + with dom and `f n \<noteq> 0` have "x \<in> {Suc n ..< findzero f (Suc n)}" by simp
1.813 + with IH and `f n \<noteq> 0`
1.814 + show ?thesis by simp
1.815 + qed
1.816 +qed
1.817 +text_raw {*
1.818 +\isamarkupfalse\isabellestyle{tt}
1.819 +\end{minipage}\vspace{6pt}\hrule
1.820 +\caption{A proof about a partial function}\label{findzero_isar}
1.821 +\end{figure}
1.822 +*}
1.823 +
1.824 +subsection {* Partial termination proofs *}
1.825 +
1.826 +text {*
1.827 + Now that we have proved some interesting properties about our
1.828 + function, we should turn to the domain predicate and see if it is
1.829 + actually true for some values. Otherwise we would have just proved
1.830 + lemmas with @{term False} as a premise.
1.831 +
1.832 + Essentially, we need some introduction rules for @{text
1.833 + findzero_dom}. The function package can prove such domain
1.834 + introduction rules automatically. But since they are not used very
1.835 + often (they are almost never needed if the function is total), this
1.836 + functionality is disabled by default for efficiency reasons. So we have to go
1.837 + back and ask for them explicitly by passing the @{text
1.838 + "(domintros)"} option to the function package:
1.839 +
1.840 +\vspace{1ex}
1.841 +\noindent\cmd{function} @{text "(domintros) findzero :: \"(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat\""}\\%
1.842 +\cmd{where}\isanewline%
1.843 +\ \ \ldots\\
1.844 +
1.845 + \noindent Now the package has proved an introduction rule for @{text findzero_dom}:
1.846 +*}
1.847 +
1.848 +thm findzero.domintros
1.849 +
1.850 +text {*
1.851 + @{thm[display] findzero.domintros}
1.852 +
1.853 + Domain introduction rules allow us to show that a given value lies in the
1.854 + domain of a function if the arguments of all recursive calls
1.855 + are in the domain as well. They let us take a \qt{single step} in a
1.856 + termination proof. Usually, you want to combine them with a suitable
1.857 + induction principle.
1.858 +
1.859 + Since our function increases its argument at recursive calls, we
1.860 + need an induction principle which works \qt{backwards}. We will use
1.861 + @{text inc_induct}, which allows us to do induction from a fixed number
1.862 + \qt{downwards}:
1.863 +
1.864 + \begin{center}@{thm inc_induct}\hfill(@{text "inc_induct"})\end{center}
1.865 +
1.866 + Figure \ref{findzero_term} gives a detailed Isar proof of the fact
1.867 + that @{text findzero} terminates if there is a zero which is greater
1.868 + than or equal to @{term n}. First we derive two useful rules which will
1.869 + solve the base case and the step case of the induction. The
1.870 + induction is then straightforward, except for the unusual induction
1.871 + principle.
1.872 +
1.873 +*}
1.874 +
1.875 +text_raw {*
1.876 +\begin{figure}
1.877 +\hrule\vspace{6pt}
1.878 +\begin{minipage}{0.8\textwidth}
1.879 +\isabellestyle{it}
1.880 +\isastyle\isamarkuptrue
1.881 +*}
1.882 +lemma findzero_termination:
1.883 + assumes "x \<ge> n" and "f x = 0"
1.884 + shows "findzero_dom (f, n)"
1.885 +proof -
1.886 + have base: "findzero_dom (f, x)"
1.887 + by (rule findzero.domintros) (simp add:`f x = 0`)
1.888 +
1.889 + have step: "\<And>i. findzero_dom (f, Suc i)
1.890 + \<Longrightarrow> findzero_dom (f, i)"
1.891 + by (rule findzero.domintros) simp
1.892 +
1.893 + from `x \<ge> n` show ?thesis
1.894 + proof (induct rule:inc_induct)
1.895 + show "findzero_dom (f, x)" by (rule base)
1.896 + next
1.897 + fix i assume "findzero_dom (f, Suc i)"
1.898 + thus "findzero_dom (f, i)" by (rule step)
1.899 + qed
1.900 +qed
1.901 +text_raw {*
1.902 +\isamarkupfalse\isabellestyle{tt}
1.903 +\end{minipage}\vspace{6pt}\hrule
1.904 +\caption{Termination proof for @{text findzero}}\label{findzero_term}
1.905 +\end{figure}
1.906 +*}
1.907 +
1.908 +text {*
1.909 + Again, the proof given in Fig.~\ref{findzero_term} has a lot of
1.910 + detail in order to explain the principles. Using more automation, we
1.911 + can also have a short proof:
1.912 +*}
1.913 +
1.914 +lemma findzero_termination_short:
1.915 + assumes zero: "x >= n"
1.916 + assumes [simp]: "f x = 0"
1.917 + shows "findzero_dom (f, n)"
1.918 +using zero
1.919 +by (induct rule:inc_induct) (auto intro: findzero.domintros)
1.920 +
1.921 +text {*
1.922 + \noindent It is simple to combine the partial correctness result with the
1.923 + termination lemma:
1.924 +*}
1.925 +
1.926 +lemma findzero_total_correctness:
1.927 + "f x = 0 \<Longrightarrow> f (findzero f 0) = 0"
1.928 +by (blast intro: findzero_zero findzero_termination)
1.929 +
1.930 +subsection {* Definition of the domain predicate *}
1.931 +
1.932 +text {*
1.933 + Sometimes it is useful to know what the definition of the domain
1.934 + predicate looks like. Actually, @{text findzero_dom} is just an
1.935 + abbreviation:
1.936 +
1.937 + @{abbrev[display] findzero_dom}
1.938 +
1.939 + The domain predicate is the \emph{accessible part} of a relation @{const
1.940 + findzero_rel}, which was also created internally by the function
1.941 + package. @{const findzero_rel} is just a normal
1.942 + inductive predicate, so we can inspect its definition by
1.943 + looking at the introduction rules @{text findzero_rel.intros}.
1.944 + In our case there is just a single rule:
1.945 +
1.946 + @{thm[display] findzero_rel.intros}
1.947 +
1.948 + The predicate @{const findzero_rel}
1.949 + describes the \emph{recursion relation} of the function
1.950 + definition. The recursion relation is a binary relation on
1.951 + the arguments of the function that relates each argument to its
1.952 + recursive calls. In general, there is one introduction rule for each
1.953 + recursive call.
1.954 +
1.955 + The predicate @{term "accp findzero_rel"} is the accessible part of
1.956 + that relation. An argument belongs to the accessible part if it can
1.957 + be reached in a finite number of steps (cf.~its definition in @{text
1.958 + "Wellfounded.thy"}).
1.959 +
1.960 + Since the domain predicate is just an abbreviation, you can use
1.961 + lemmas for @{const accp} and @{const findzero_rel} directly. Some
1.962 + lemmas which are occasionally useful are @{text accpI}, @{text
1.963 + accp_downward}, and of course the introduction and elimination rules
1.964 + for the recursion relation @{text "findzero_rel.intros"} and @{text "findzero_rel.cases"}.
1.965 +*}
1.966 +
1.967 +(*lemma findzero_nicer_domintros:
1.968 + "f x = 0 \<Longrightarrow> findzero_dom (f, x)"
1.969 + "findzero_dom (f, Suc x) \<Longrightarrow> findzero_dom (f, x)"
1.970 +by (rule accpI, erule findzero_rel.cases, auto)+
1.971 +*)
1.972 +
1.973 +subsection {* A Useful Special Case: Tail recursion *}
1.974 +
1.975 +text {*
1.976 + The domain predicate is our trick that allows us to model partiality
1.977 + in a world of total functions. The downside of this is that we have
1.978 + to carry it around all the time. The termination proof above allowed
1.979 + us to replace the abstract @{term "findzero_dom (f, n)"} by the more
1.980 + concrete @{term "(x \<ge> n \<and> f x = (0::nat))"}, but the condition is still
1.981 + there and can only be discharged for special cases.
1.982 + In particular, the domain predicate guards the unfolding of our
1.983 + function, since it is there as a condition in the @{text psimp}
1.984 + rules.
1.985 +
1.986 + Now there is an important special case: We can actually get rid
1.987 + of the condition in the simplification rules, \emph{if the function
1.988 + is tail-recursive}. The reason is that for all tail-recursive
1.989 + equations there is a total function satisfying them, even if they
1.990 + are non-terminating.
1.991 +
1.1001 +
1.1002 + The function package internally does the right construction and can
1.1003 + derive the unconditional simp rules, if we ask it to do so. Luckily,
1.1004 + our @{const "findzero"} function is tail-recursive, so we can just go
1.1005 + back and add another option to the \cmd{function} command:
1.1006 +
1.1007 +\vspace{1ex}
1.1008 +\noindent\cmd{function} @{text "(domintros, tailrec) findzero :: \"(nat \<Rightarrow> nat) \<Rightarrow> nat \<Rightarrow> nat\""}\\%
1.1009 +\cmd{where}\isanewline%
1.1010 +\ \ \ldots\\%
1.1011 +
1.1012 +
1.1013 + \noindent Now, we actually get unconditional simplification rules, even
1.1014 + though the function is partial:
1.1015 +*}
1.1016 +
1.1017 +thm findzero.simps
1.1018 +
1.1019 +text {*
1.1020 + @{thm[display] findzero.simps}
1.1021 +
1.1022 + \noindent Of course these would make the simplifier loop, so we had better
1.1023 + remove them from the simpset:
1.1024 +*}
1.1025 +
1.1026 +declare findzero.simps[simp del]
1.1027 +
1.1028 +text {*
1.1029 + Getting rid of the domain conditions in the simplification rules is
1.1030 + not only useful because it simplifies proofs. It is also required in
1.1031 + order to use Isabelle's code generator to generate ML code
1.1032 + from a function definition.
1.1033 + Since the code generator only works with equations, it cannot be
1.1034 + used with @{text "psimp"} rules. Thus, in order to generate code for
1.1035 + partial functions, they must be defined by tail recursion.
1.1036 + Luckily, many functions have a relatively natural tail recursive
1.1037 + definition.
1.1038 +*}
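Operationally, a tail-recursive function is exactly a loop, which is why a total function satisfying the equations always exists. For readers who prefer a conventional programming view, here is a Python sketch (purely illustrative, not part of the Isabelle development) of the loop described by the tail-recursive equations of @{const findzero}; like the HOL function, it diverges when @{term f} has no zero at or above @{term n}:

```python
def findzero(f, n):
    # Loop reading of the tail-recursive equation
    #   findzero f n = (if f n = 0 then n else findzero f (Suc n)).
    # Runs forever when f has no zero at or above n, just as the
    # HOL function is underspecified outside its domain there.
    while f(n) != 0:
        n += 1
    return n
```

For instance, `findzero(lambda x: x % 7, 10)` returns 14, the least multiple of 7 at or above 10.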
1.1039 +
1.1040 +section {* Nested recursion *}
1.1041 +
1.1042 +text {*
1.1043 + Recursive calls which are nested in one another frequently cause
1.1044 + complications, since their termination proof can depend on a partial
1.1045 + correctness property of the function itself.
1.1046 +
1.1047 + As a small example, we define the \qt{nested zero} function:
1.1048 +*}
1.1049 +
1.1050 +function nz :: "nat \<Rightarrow> nat"
1.1051 +where
1.1052 + "nz 0 = 0"
1.1053 +| "nz (Suc n) = nz (nz n)"
1.1054 +by pat_completeness auto
1.1055 +
1.1056 +text {*
1.1057 + If we attempt to prove termination using the identity measure on
1.1058 + naturals, this fails:
1.1059 +*}
1.1060 +
1.1061 +termination
1.1062 + apply (relation "measure (\<lambda>n. n)")
1.1063 + apply auto
1.1064 +
1.1065 +txt {*
1.1066 + We get stuck with the subgoal
1.1067 +
1.1068 + @{subgoals[display]}
1.1069 +
1.1070 + Of course this statement is true, since we know that @{const nz} is
1.1071 + the zero function. And in fact we have no problem proving this
1.1072 + property by induction.
1.1073 +*}
1.1074 +(*<*)oops(*>*)
1.1075 +lemma nz_is_zero: "nz_dom n \<Longrightarrow> nz n = 0"
1.1076 + by (induct rule:nz.pinduct) auto
1.1077 +
1.1078 +text {*
1.1079 + We formulate this as a partial correctness lemma with the condition
1.1080 + @{term "nz_dom n"}. This allows us to prove it with the @{text
1.1081 + pinduct} rule before we have proved termination. With this lemma,
1.1082 + the termination proof works as expected:
1.1083 +*}
1.1084 +
1.1085 +termination
1.1086 + by (relation "measure (\<lambda>n. n)") (auto simp: nz_is_zero)
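The partial-correctness property is also easy to observe by computation. A direct Python transcription of the defining equations (purely illustrative; the representation of naturals as Python integers is our own choice) terminates and always returns zero:

```python
def nz(n):
    # Transcription of  nz 0 = 0  and  nz (Suc n) = nz (nz n).
    # The inner call always yields 0, so the outer call is nz 0 = 0;
    # this is exactly the property needed for the termination proof.
    return 0 if n == 0 else nz(nz(n - 1))
```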
1.1087 +
1.1088 +text {*
1.1089 + As a general strategy, one should prove the statements needed for
1.1090 + termination as a partial property first. Then they can be used to do
1.1091 + the termination proof. This also works for less trivial
1.1092 + examples. Figure \ref{f91} defines the 91-function, a well-known
1.1093 + challenge problem due to John McCarthy, and proves its termination.
1.1094 +*}
1.1095 +
1.1096 +text_raw {*
1.1097 +\begin{figure}
1.1098 +\hrule\vspace{6pt}
1.1099 +\begin{minipage}{0.8\textwidth}
1.1100 +\isabellestyle{it}
1.1101 +\isastyle\isamarkuptrue
1.1102 +*}
1.1103 +
1.1104 +function f91 :: "nat \<Rightarrow> nat"
1.1105 +where
1.1106 + "f91 n = (if 100 < n then n - 10 else f91 (f91 (n + 11)))"
1.1107 +by pat_completeness auto
1.1108 +
1.1109 +lemma f91_estimate:
1.1110 + assumes trm: "f91_dom n"
1.1111 + shows "n < f91 n + 11"
1.1112 +using trm by induct auto
1.1113 +
1.1114 +termination
1.1115 +proof
1.1116 + let ?R = "measure (\<lambda>x. 101 - x)"
1.1117 + show "wf ?R" ..
1.1118 +
1.1119 + fix n :: nat assume "\<not> 100 < n" -- "Assumptions for both calls"
1.1120 +
1.1121 + thus "(n + 11, n) \<in> ?R" by simp -- "Inner call"
1.1122 +
1.1123 + assume inner_trm: "f91_dom (n + 11)" -- "Outer call"
1.1124 + with f91_estimate have "n + 11 < f91 (n + 11) + 11" .
1.1125 + with `\<not> 100 < n` show "(f91 (n + 11), n) \<in> ?R" by simp
1.1126 +qed
1.1127 +
1.1128 +text_raw {*
1.1129 +\isamarkupfalse\isabellestyle{tt}
1.1130 +\end{minipage}
1.1131 +\vspace{6pt}\hrule
1.1132 +\caption{McCarthy's 91-function}\label{f91}
1.1133 +\end{figure}
1.1134 +*}
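To see the 91-function at work, one can again transcribe it into Python (only as an illustration): it returns 91 for every argument up to 100, and @{text "n - 10"} above. For example, @{text "f91 99"} evaluates via @{text "f91 (f91 110) = f91 100 = 91"}.

```python
def f91(n):
    # f91 n = (if 100 < n then n - 10 else f91 (f91 (n + 11)))
    # Terminates because 101 - n decreases at both recursive calls,
    # as established in the termination proof of Fig. "f91".
    return n - 10 if n > 100 else f91(f91(n + 11))
```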
1.1135 +
1.1136 +
1.1137 +section {* Higher-Order Recursion *}
1.1138 +
1.1139 +text {*
1.1140 + Higher-order recursion occurs when recursive calls
1.1141 + are passed as arguments to higher-order combinators such as @{const
1.1142 + map} and @{term filter}.
1.1143 + As an example, imagine a datatype of n-ary trees:
1.1144 +*}
1.1145 +
1.1146 +datatype 'a tree =
1.1147 + Leaf 'a
1.1148 +| Branch "'a tree list"
1.1149 +
1.1150 +
1.1151 +text {* \noindent We can define a function which recursively reverses the order of the subtrees
1.1152 + at each node, using the list functions @{const rev} and @{const map}: *}
1.1153 +
1.1154 +fun mirror :: "'a tree \<Rightarrow> 'a tree"
1.1155 +where
1.1156 + "mirror (Leaf n) = Leaf n"
1.1157 +| "mirror (Branch l) = Branch (rev (map mirror l))"
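The same recursion pattern can be mimicked in Python on nested lists standing in for trees (a sketch under the assumption that any non-list value plays the role of a leaf; illustration only):

```python
def mirror(t):
    # Branch case: map mirror over the subtrees and reverse the result,
    # matching  mirror (Branch l) = Branch (rev (map mirror l)).
    if isinstance(t, list):
        return [mirror(s) for s in reversed(t)]
    return t  # Leaf case: mirror (Leaf n) = Leaf n
```

For example, `mirror([1, [2, 3], 4])` yields `[4, [3, 2], 1]`.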
1.1158 +
1.1159 +text {*
1.1160 + Although the definition is accepted without problems, let us look at the termination proof:
1.1161 +*}
1.1162 +
1.1163 +termination proof
1.1164 + txt {*
1.1165 +
1.1166 + As usual, we have to give a well-founded relation such that the
1.1167 + arguments of the recursive calls get smaller. But what exactly are
1.1168 + the arguments of the recursive calls when mirror is given as an
1.1169 + argument to @{const map}? Isabelle gives us the
1.1170 + subgoals
1.1171 +
1.1172 + @{subgoals[display,indent=0]}
1.1173 +
1.1174 + So the system seems to know that @{const map} only
1.1175 + applies the recursive call @{term "mirror"} to elements
1.1176 + of @{term "l"}, which is essential for the termination proof.
1.1177 +
1.1178 + This knowledge about @{const map} is encoded in so-called congruence rules,
1.1179 + which are special theorems known to the \cmd{function} command. The
1.1180 + rule for @{const map} is
1.1181 +
1.1182 + @{thm[display] map_cong}
1.1183 +
1.1184 + You can read this in the following way: Two applications of @{const
1.1185 + map} are equal, if the list arguments are equal and the functions
1.1186 + coincide on the elements of the list. This means that for the value
1.1187 + @{term "map f l"} we only have to know how @{term f} behaves on
1.1188 + the elements of @{term l}.
1.1189 +
1.1190 + Usually, one such congruence rule is
1.1191 + needed for each higher-order construct that is used when defining
1.1192 + new functions. In fact, even basic functions like @{const
1.1193 + If} and @{const Let} are handled by this mechanism. The congruence
1.1194 + rule for @{const If} states that the @{text then} branch is only
1.1195 + relevant if the condition is true, and the @{text else} branch only if it
1.1196 + is false:
1.1197 +
1.1198 + @{thm[display] if_cong}
1.1199 +
1.1200 + Congruence rules can be added to the
1.1201 + function package by giving them the @{term fundef_cong} attribute.
1.1202 +
1.1203 + The constructs that are predefined in Isabelle usually
1.1204 + come with suitable congruence rules.
1.1205 + But if you define your own higher-order functions, you may have to
1.1206 + state and prove the required congruence rules yourself, if you want to use your
1.1207 + functions in recursive definitions.
1.1208 +*}
1.1209 +(*<*)oops(*>*)
1.1210 +
1.1211 +subsection {* Congruence Rules and Evaluation Order *}
1.1212 +
1.1213 +text {*
1.1214 + Higher-order logic differs from functional programming languages in
1.1215 + that it has no built-in notion of evaluation order. A program is
1.1216 + just a set of equations, and it is not specified how they must be
1.1217 + evaluated.
1.1218 +
1.1219 + However, for the purpose of function definitions, evaluation order
1.1220 + enters the picture implicitly when we reason about termination.
1.1221 + Congruence rules express that a certain evaluation order is
1.1222 + consistent with the logical definition.
1.1223 +
1.1224 + Consider the following function.
1.1225 +*}
1.1226 +
1.1227 +function f :: "nat \<Rightarrow> bool"
1.1228 +where
1.1229 + "f n = (n = 0 \<or> f (n - 1))"
1.1230 +(*<*)by pat_completeness auto(*>*)
1.1231 +
1.1232 +text {*
1.1233 + For this definition, the termination proof fails. The default configuration
1.1234 + specifies no congruence rule for disjunction. We have to add a
1.1235 + congruence rule that specifies left-to-right evaluation order:
1.1236 +
1.1237 + \vspace{1ex}
1.1238 + \noindent @{thm disj_cong}\hfill(@{text "disj_cong"})
1.1239 + \vspace{1ex}
1.1240 +
1.1241 + Now the definition works without problems. Note how the termination
1.1242 + proof depends on the extra condition that we get from the congruence
1.1243 + rule.
1.1244 +
1.1245 + However, as evaluation is not a hard-wired concept, we
1.1246 + could just turn everything around by declaring a different
1.1247 + congruence rule. Then we can make the reverse definition:
1.1248 +*}
1.1249 +
1.1250 +lemma disj_cong2[fundef_cong]:
1.1251 + "(\<not> Q' \<Longrightarrow> P = P') \<Longrightarrow> (Q = Q') \<Longrightarrow> (P \<or> Q) = (P' \<or> Q')"
1.1252 + by blast
1.1253 +
1.1254 +fun f' :: "nat \<Rightarrow> bool"
1.1255 +where
1.1256 + "f' n = (f' (n - 1) \<or> n = 0)"
1.1257 +
1.1258 +text {*
1.1259 + \noindent These examples show that, in general, there is no \qt{best} set of
1.1260 + congruence rules.
1.1261 +
1.1262 + However, such tweaking should rarely be necessary in
1.1263 + practice, as most of the time, the default set of congruence rules
1.1264 + works well.
1.1265 +*}
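The computational reading licensed by @{text disj_cong} is familiar from programming languages with short-circuit evaluation. In Python (again only as an illustration), @{text "or"} evaluates left to right, so a direct transcription of @{const f} terminates:

```python
def f(n):
    # f n = (n = 0 \/ f (n - 1)): Python's `or` short-circuits
    # left to right, so the recursive call is reached only when n != 0.
    return n == 0 or f(n - 1)

# The mirrored definition  f' n = (f' (n - 1) \/ n = 0)  would, under
# the same left-to-right evaluation, recurse before ever testing n = 0
# and never terminate -- which is why it needs the alternative
# congruence rule disj_cong2 in the logic.
```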
1.1266 +
1.1267 +end