1.1 --- a/doc-src/TutorialI/Advanced/Partial.thy Thu Aug 09 10:17:45 2001 +0200
1.2 +++ b/doc-src/TutorialI/Advanced/Partial.thy Thu Aug 09 18:12:15 2001 +0200
1.3 @@ -1,7 +1,7 @@
1.4 (*<*)theory Partial = While_Combinator:(*>*)
1.5
1.6 -text{*\noindent Throughout the tutorial we have emphasized the fact
1.7 -that all functions in HOL are total. Hence we cannot hope to define
1.8 +text{*\noindent Throughout this tutorial, we have emphasized
1.9 +that all functions in HOL are total. We cannot hope to define
1.10 truly partial functions, but must make them total. A straightforward
1.11 method is to lift the result type of the function from $\tau$ to
1.12 $\tau$~@{text option} (see \ref{sec:option}), where @{term None} is
1.13 @@ -45,7 +45,9 @@
1.14
1.15 subsubsection{*Guarded Recursion*}
1.16
1.17 -text{* Neither \isacommand{primrec} nor \isacommand{recdef} allow to
1.18 +text{*
1.19 +\index{recursion!guarded}%
1.20 +Neither \isacommand{primrec} nor \isacommand{recdef} allow to
1.21 prefix an equation with a condition in the way ordinary definitions do
1.22 (see @{term minus} above). Instead we have to move the condition over
1.23 to the right-hand side of the equation. Given a partial function $f$
1.24 @@ -73,7 +75,7 @@
1.25 As a more substantial example we consider the problem of searching a graph.
1.26 For simplicity our graph is given by a function @{term f} of
1.27 type @{typ"'a \<Rightarrow> 'a"} which
1.28 -maps each node to its successor, i.e.\ the graph is really a tree.
1.29 +maps each node to its successor; the graph is really a tree.
1.30 The task is to find the end of a chain, modelled by a node pointing to
1.31 itself. Here is a first attempt:
1.32 @{prop[display]"find(f,x) = (if f x = x then x else find(f, f x))"}
1.33 @@ -118,21 +120,21 @@
1.34 f"}. The proof requires unfolding the definition of @{term step1},
1.35 as specified in the \isacommand{hints} above.
1.36
1.37 -Normally you will then derive the following conditional variant of and from
1.38 -the recursion equation
1.39 +Normally you will then derive the following conditional variant from
1.40 +the recursion equation:
1.41 *}
1.42
1.43 lemma [simp]:
1.44 "wf(step1 f) \<Longrightarrow> find(f,x) = (if f x = x then x else find(f, f x))"
1.45 by simp
1.46
1.47 -text{*\noindent and then disable the original recursion equation:*}
1.48 +text{*\noindent Then you should disable the original recursion equation:*}
1.49
1.50 declare find.simps[simp del]
1.51
1.52 text{*
1.53 -We can reason about such underdefined functions just like about any other
1.54 -recursive function. Here is a simple example of recursion induction:
1.55 +Reasoning about such underdefined functions is like that for other
1.56 +recursive functions. Here is a simple example of recursion induction:
1.57 *}
1.58
1.59 lemma "wf(step1 f) \<longrightarrow> f(find(f,x)) = find(f,x)"
1.60 @@ -154,8 +156,8 @@
1.61 \begin{verbatim}
1.62 x := s; while b(x) do x := c(x); return x
1.63 \end{verbatim}
1.64 -In general, @{term s} will be a tuple (better still: a record). As an example
1.65 -consider the following definition of function @{term find} above:
1.66 +In general, @{term s} will be a tuple or record. As an example
1.67 +consider the following definition of function @{term find}:
1.68 *}
1.69
1.70 constdefs find2 :: "('a \<Rightarrow> 'a) \<Rightarrow> 'a \<Rightarrow> 'a"
1.71 @@ -171,7 +173,7 @@
1.72 Although the definition of tail recursive functions via @{term while} avoids
1.73 termination proofs, there is no free lunch. When proving properties of
1.74 functions defined by @{term while}, termination rears its ugly head
1.75 -again. Here is @{thm[source]while_rule}, the well known proof rule for total
1.76 +again. Here is \tdx{while_rule}, the well known proof rule for total
1.77 correctness of loops expressed with @{term while}:
1.78 @{thm[display,margin=50]while_rule[no_vars]} @{term P} needs to be true of
1.79 the initial state @{term s} and invariant under @{term c} (premises 1
1.80 @@ -205,7 +207,7 @@
1.81
1.82 text{* Let us conclude this section on partial functions by a
1.83 discussion of the merits of the @{term while} combinator. We have
1.84 -already seen that the advantage (if it is one) of not having to
1.85 +already seen that the advantage of not having to
1.86 provide a termination argument when defining a function via @{term
1.87 while} merely puts off the evil hour. On top of that, tail recursive
1.88 functions tend to be more complicated to reason about. So why use
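[Editorial illustration, not part of the patched theory sources: the schema `x := s; while b(x) do x := c(x); return x` discussed above, and the `find2` definition built on the `while` combinator, can be sketched outside HOL. The names `while_comb` and `find2` below are our own; the successor function `f` is a made-up example graph.]

```python
def while_comb(b, c, s):
    # Functional analogue of the while combinator:
    # start from state s, apply c while the test b holds, return the state.
    x = s
    while b(x):
        x = c(x)
    return x

def find2(f, x):
    # Tail-recursive search for the end of a chain: follow successors
    # until reaching a node that points to itself (f(y) == y).
    return while_comb(lambda y: f(y) != y, f, x)

# Example graph: 0 -> 1 -> 2, and node 2 points to itself.
f = {0: 1, 1: 2, 2: 2}.__getitem__
print(find2(f, 0))  # -> 2
```

As in the text, nothing here guarantees termination: on a graph with a cycle of length greater than one, the loop runs forever, which is the "evil hour" the `while` combinator merely puts off.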
2.1 --- a/doc-src/TutorialI/Advanced/WFrec.thy Thu Aug 09 10:17:45 2001 +0200
2.2 +++ b/doc-src/TutorialI/Advanced/WFrec.thy Thu Aug 09 18:12:15 2001 +0200
2.3 @@ -2,7 +2,7 @@
2.4
2.5 text{*\noindent
2.6 So far, all recursive definitions were shown to terminate via measure
2.7 -functions. Sometimes this can be quite inconvenient or even
2.8 +functions. Sometimes this can be inconvenient or
2.9 impossible. Fortunately, \isacommand{recdef} supports much more
2.10 general definitions. For example, termination of Ackermann's function
2.11 can be shown by means of the \rmindex{lexicographic product} @{text"<*lex*>"}:
2.12 @@ -23,18 +23,19 @@
2.13 In general, \isacommand{recdef} supports termination proofs based on
2.14 arbitrary well-founded relations as introduced in \S\ref{sec:Well-founded}.
2.15 This is called \textbf{well-founded
2.16 -recursion}\indexbold{recursion!well-founded}. Clearly, a function definition
2.17 -is total iff the set of all pairs $(r,l)$, where $l$ is the argument on the
2.18 +recursion}\indexbold{recursion!well-founded}. A function definition
2.19 +is total if and only if the set of
2.20 +all pairs $(r,l)$, where $l$ is the argument on the
2.21 left-hand side of an equation and $r$ the argument of some recursive call on
2.22 the corresponding right-hand side, induces a well-founded relation. For a
2.23 systematic account of termination proofs via well-founded relations see, for
2.24 example, Baader and Nipkow~\cite{Baader-Nipkow}.
2.25
2.26 -Each \isacommand{recdef} definition should be accompanied (after the name of
2.27 -the function) by a well-founded relation on the argument type of the
2.28 -function. Isabelle/HOL formalizes some of the most important
2.29 +Each \isacommand{recdef} definition should be accompanied (after the function's
2.30 +name) by a well-founded relation on the function's argument type.
2.31 +Isabelle/HOL formalizes some of the most important
2.32 constructions of well-founded relations (see \S\ref{sec:Well-founded}). For
2.33 -example, @{term"measure f"} is always well-founded, and the lexicographic
2.34 +example, @{term"measure f"} is always well-founded. The lexicographic
2.35 product of two well-founded relations is again well-founded, which we relied
2.36 on when defining Ackermann's function above.
2.37 Of course the lexicographic product can also be iterated:
2.38 @@ -55,8 +56,8 @@
2.39 inv_image}. All these constructions are known to \isacommand{recdef}. Thus you
2.40 will never have to prove well-foundedness of any relation composed
2.41 solely of these building blocks. But of course the proof of
2.42 -termination of your function definition, i.e.\ that the arguments
2.43 -decrease with every recursive call, may still require you to provide
2.44 +termination of your function definition --- that the arguments
2.45 +decrease with every recursive call --- may still require you to provide
2.46 additional lemmas.
2.47
2.48 It is also possible to use your own well-founded relations with
2.49 @@ -79,7 +80,7 @@
2.50
2.51 txt{*\noindent
2.52 The proof is by showing that our relation is a subset of another well-founded
2.53 -relation: one given by a measure function.
2.54 +relation: one given by a measure function.\index{*wf_subset (theorem)}
2.55 *};
2.56
2.57 apply (rule wf_subset [of "measure (\<lambda>k::nat. N-k)"], blast);
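[Editorial illustration, not part of the patched theory sources: the lexicographic termination argument for Ackermann's function mentioned in this hunk can be made concrete. The Python sketch below mirrors the usual recursion equations; every recursive call strictly decreases the pair (m, n) in the lexicographic order on naturals, which is exactly what `<*lex*>` encodes.]

```python
def ack(m, n):
    # Termination: each call decreases (m, n) lexicographically.
    #   ack(m-1, 1)           -- first component decreases
    #   ack(m, n-1)           -- first equal, second decreases
    #   ack(m-1, ack(m, n-1)) -- first component decreases
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))  # -> 9
```

Only small arguments are feasible in practice, of course; the point is the shape of the termination argument, not efficiency.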
3.1 --- a/doc-src/TutorialI/Advanced/advanced.tex Thu Aug 09 10:17:45 2001 +0200
3.2 +++ b/doc-src/TutorialI/Advanced/advanced.tex Thu Aug 09 18:12:15 2001 +0200
3.3 @@ -12,15 +12,15 @@
3.4 \section{Advanced Forms of Recursion}
3.5 \index{recdef@\isacommand {recdef} (command)|(}
3.6
3.7 -The purpose of this section is to introduce advanced forms of
3.8 +This section introduces advanced forms of
3.9 \isacommand{recdef}: how to establish termination by means other than measure
3.10 -functions, how to define recursive function over nested recursive datatypes,
3.11 +functions, how to define recursive functions over nested recursive datatypes
3.12 and how to deal with partial functions.
3.13
3.14 If, after reading this section, you feel that the definition of recursive
3.15 functions is overly complicated by the requirement of
3.16 -totality, you should ponder the alternative, a logic of partial functions,
3.17 -where recursive definitions are always wellformed. For a start, there are many
3.18 +totality, you should ponder the alternatives. In a logic of partial functions,
3.19 +recursive definitions are always accepted. But there are many
3.20 such logics, and no clear winner has emerged. And in all of these logics you
3.21 are (more or less frequently) required to reason about the definedness of
3.22 terms explicitly. Thus one shifts definedness arguments from definition time to
3.23 @@ -38,7 +38,7 @@
3.24 \input{Recdef/document/Nested2.tex}
3.25
3.26 \subsection{Partial Functions}
3.27 -\index{partial function}
3.28 +\index{functions!partial}
3.29 \input{Advanced/document/Partial.tex}
3.30
3.31 \index{recdef@\isacommand {recdef} (command)|)}
4.1 --- a/doc-src/TutorialI/Advanced/document/Partial.tex Thu Aug 09 10:17:45 2001 +0200
4.2 +++ b/doc-src/TutorialI/Advanced/document/Partial.tex Thu Aug 09 18:12:15 2001 +0200
4.3 @@ -3,8 +3,8 @@
4.4 \def\isabellecontext{Partial}%
4.5 %
4.6 \begin{isamarkuptext}%
4.7 -\noindent Throughout the tutorial we have emphasized the fact
4.8 -that all functions in HOL are total. Hence we cannot hope to define
4.9 +\noindent Throughout this tutorial, we have emphasized
4.10 +that all functions in HOL are total. We cannot hope to define
4.11 truly partial functions, but must make them total. A straightforward
4.12 method is to lift the result type of the function from $\tau$ to
4.13 $\tau$~\isa{option} (see \ref{sec:option}), where \isa{None} is
4.14 @@ -47,6 +47,7 @@
4.15 }
4.16 %
4.17 \begin{isamarkuptext}%
4.18 +\index{recursion!guarded}%
4.19 Neither \isacommand{primrec} nor \isacommand{recdef} allow to
4.20 prefix an equation with a condition in the way ordinary definitions do
4.21 (see \isa{minus} above). Instead we have to move the condition over
4.22 @@ -76,7 +77,7 @@
4.23 As a more substantial example we consider the problem of searching a graph.
4.24 For simplicity our graph is given by a function \isa{f} of
4.25 type \isa{{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a} which
4.26 -maps each node to its successor, i.e.\ the graph is really a tree.
4.27 +maps each node to its successor; the graph is really a tree.
4.28 The task is to find the end of a chain, modelled by a node pointing to
4.29 itself. Here is a first attempt:
4.30 \begin{isabelle}%
4.31 @@ -126,19 +127,19 @@
4.32 argument stays unchanged and the second one descends along \isa{step{\isadigit{1}}\ f}. The proof requires unfolding the definition of \isa{step{\isadigit{1}}},
4.33 as specified in the \isacommand{hints} above.
4.34
4.35 -Normally you will then derive the following conditional variant of and from
4.36 -the recursion equation%
4.37 +Normally you will then derive the following conditional variant from
4.38 +the recursion equation:%
4.39 \end{isamarkuptext}%
4.40 \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\isanewline
4.41 \ \ {\isachardoublequote}wf{\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}\ {\isasymLongrightarrow}\ find{\isacharparenleft}f{\isacharcomma}x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ f\ x\ {\isacharequal}\ x\ then\ x\ else\ find{\isacharparenleft}f{\isacharcomma}\ f\ x{\isacharparenright}{\isacharparenright}{\isachardoublequote}\isanewline
4.42 \isacommand{by}\ simp%
4.43 \begin{isamarkuptext}%
4.44 -\noindent and then disable the original recursion equation:%
4.45 +\noindent Then you should disable the original recursion equation:%
4.46 \end{isamarkuptext}%
4.47 \isacommand{declare}\ find{\isachardot}simps{\isacharbrackleft}simp\ del{\isacharbrackright}%
4.48 \begin{isamarkuptext}%
4.49 -We can reason about such underdefined functions just like about any other
4.50 -recursive function. Here is a simple example of recursion induction:%
4.51 +Reasoning about such underdefined functions is like that for other
4.52 +recursive functions. Here is a simple example of recursion induction:%
4.53 \end{isamarkuptext}%
4.54 \isacommand{lemma}\ {\isachardoublequote}wf{\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}\ {\isasymlongrightarrow}\ f{\isacharparenleft}find{\isacharparenleft}f{\isacharcomma}x{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ find{\isacharparenleft}f{\isacharcomma}x{\isacharparenright}{\isachardoublequote}\isanewline
4.55 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ f\ x\ rule{\isacharcolon}find{\isachardot}induct{\isacharparenright}\isanewline
4.56 @@ -162,8 +163,8 @@
4.57 \begin{verbatim}
4.58 x := s; while b(x) do x := c(x); return x
4.59 \end{verbatim}
4.60 -In general, \isa{s} will be a tuple (better still: a record). As an example
4.61 -consider the following definition of function \isa{find} above:%
4.62 +In general, \isa{s} will be a tuple or record. As an example
4.63 +consider the following definition of function \isa{find}:%
4.64 \end{isamarkuptext}%
4.65 \isacommand{constdefs}\ find{\isadigit{2}}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a{\isacharparenright}\ {\isasymRightarrow}\ {\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a{\isachardoublequote}\isanewline
4.66 \ \ {\isachardoublequote}find{\isadigit{2}}\ f\ x\ {\isasymequiv}\isanewline
4.67 @@ -178,7 +179,7 @@
4.68 Although the definition of tail recursive functions via \isa{while} avoids
4.69 termination proofs, there is no free lunch. When proving properties of
4.70 functions defined by \isa{while}, termination rears its ugly head
4.71 -again. Here is \isa{while{\isacharunderscore}rule}, the well known proof rule for total
4.72 +again. Here is \tdx{while_rule}, the well known proof rule for total
4.73 correctness of loops expressed with \isa{while}:
4.74 \begin{isabelle}%
4.75 \ \ \ \ \ {\isasymlbrakk}P\ s{\isacharsemicolon}\ {\isasymAnd}s{\isachardot}\ {\isasymlbrakk}P\ s{\isacharsemicolon}\ b\ s{\isasymrbrakk}\ {\isasymLongrightarrow}\ P\ {\isacharparenleft}c\ s{\isacharparenright}{\isacharsemicolon}\isanewline
4.76 @@ -214,7 +215,7 @@
4.77 \begin{isamarkuptext}%
4.78 Let us conclude this section on partial functions by a
4.79 discussion of the merits of the \isa{while} combinator. We have
4.80 -already seen that the advantage (if it is one) of not having to
4.81 +already seen that the advantage of not having to
4.82 provide a termination argument when defining a function via \isa{while} merely puts off the evil hour. On top of that, tail recursive
4.83 functions tend to be more complicated to reason about. So why use
4.84 \isa{while} at all? The only reason is executability: the recursion
5.1 --- a/doc-src/TutorialI/Advanced/document/WFrec.tex Thu Aug 09 10:17:45 2001 +0200
5.2 +++ b/doc-src/TutorialI/Advanced/document/WFrec.tex Thu Aug 09 18:12:15 2001 +0200
5.3 @@ -5,7 +5,7 @@
5.4 \begin{isamarkuptext}%
5.5 \noindent
5.6 So far, all recursive definitions were shown to terminate via measure
5.7 -functions. Sometimes this can be quite inconvenient or even
5.8 +functions. Sometimes this can be inconvenient or
5.9 impossible. Fortunately, \isacommand{recdef} supports much more
5.10 general definitions. For example, termination of Ackermann's function
5.11 can be shown by means of the \rmindex{lexicographic product} \isa{{\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}}:%
5.12 @@ -25,18 +25,19 @@
5.13 In general, \isacommand{recdef} supports termination proofs based on
5.14 arbitrary well-founded relations as introduced in \S\ref{sec:Well-founded}.
5.15 This is called \textbf{well-founded
5.16 -recursion}\indexbold{recursion!well-founded}. Clearly, a function definition
5.17 -is total iff the set of all pairs $(r,l)$, where $l$ is the argument on the
5.18 +recursion}\indexbold{recursion!well-founded}. A function definition
5.19 +is total if and only if the set of
5.20 +all pairs $(r,l)$, where $l$ is the argument on the
5.21 left-hand side of an equation and $r$ the argument of some recursive call on
5.22 the corresponding right-hand side, induces a well-founded relation. For a
5.23 systematic account of termination proofs via well-founded relations see, for
5.24 example, Baader and Nipkow~\cite{Baader-Nipkow}.
5.25
5.26 -Each \isacommand{recdef} definition should be accompanied (after the name of
5.27 -the function) by a well-founded relation on the argument type of the
5.28 -function. Isabelle/HOL formalizes some of the most important
5.29 +Each \isacommand{recdef} definition should be accompanied (after the function's
5.30 +name) by a well-founded relation on the function's argument type.
5.31 +Isabelle/HOL formalizes some of the most important
5.32 constructions of well-founded relations (see \S\ref{sec:Well-founded}). For
5.33 -example, \isa{measure\ f} is always well-founded, and the lexicographic
5.34 +example, \isa{measure\ f} is always well-founded. The lexicographic
5.35 product of two well-founded relations is again well-founded, which we relied
5.36 on when defining Ackermann's function above.
5.37 Of course the lexicographic product can also be iterated:%
5.38 @@ -54,8 +55,8 @@
5.39 existing well-founded relation via the inverse image construction \isa{inv{\isacharunderscore}image}. All these constructions are known to \isacommand{recdef}. Thus you
5.40 will never have to prove well-foundedness of any relation composed
5.41 solely of these building blocks. But of course the proof of
5.42 -termination of your function definition, i.e.\ that the arguments
5.43 -decrease with every recursive call, may still require you to provide
5.44 +termination of your function definition --- that the arguments
5.45 +decrease with every recursive call --- may still require you to provide
5.46 additional lemmas.
5.47
5.48 It is also possible to use your own well-founded relations with
5.49 @@ -76,7 +77,7 @@
5.50 \begin{isamarkuptxt}%
5.51 \noindent
5.52 The proof is by showing that our relation is a subset of another well-founded
5.53 -relation: one given by a measure function.%
5.54 +relation: one given by a measure function.\index{*wf_subset (theorem)}%
5.55 \end{isamarkuptxt}%
5.56 \isacommand{apply}\ {\isacharparenleft}rule\ wf{\isacharunderscore}subset\ {\isacharbrackleft}of\ {\isachardoublequote}measure\ {\isacharparenleft}{\isasymlambda}k{\isacharcolon}{\isacharcolon}nat{\isachardot}\ N{\isacharminus}k{\isacharparenright}{\isachardoublequote}{\isacharbrackright}{\isacharcomma}\ blast{\isacharparenright}%
5.57 \begin{isamarkuptxt}%
6.1 --- a/doc-src/TutorialI/Advanced/document/simp.tex Thu Aug 09 10:17:45 2001 +0200
6.2 +++ b/doc-src/TutorialI/Advanced/document/simp.tex Thu Aug 09 18:12:15 2001 +0200
6.3 @@ -7,10 +7,9 @@
6.4 %
6.5 \begin{isamarkuptext}%
6.6 \label{sec:simplification-II}\index{simplification|(}
6.7 -This section discusses some additional nifty features not covered so far and
6.8 -gives a short introduction to the simplification process itself. The latter
6.9 -is helpful to understand why a particular rule does or does not apply in some
6.10 -situation.%
6.11 +This section describes features not covered until now. It also
6.12 +outlines the simplification process itself, which can be helpful
6.13 +when the simplifier does not do what you expect of it.%
6.14 \end{isamarkuptext}%
6.15 %
6.16 \isamarkupsubsection{Advanced Features%
6.17 @@ -21,9 +20,10 @@
6.18 %
6.19 \begin{isamarkuptext}%
6.20 \label{sec:simp-cong}
6.21 -It is hardwired into the simplifier that while simplifying the conclusion $Q$
6.22 -of $P \Imp Q$ it is legal to make uses of the assumption $P$. This
6.23 -kind of contextual information can also be made available for other
6.24 +While simplifying the conclusion $Q$
6.25 +of $P \Imp Q$, it is legal to use the assumption $P$.
6.26 +For $\Imp$ this policy is hardwired, but
6.27 +contextual information can also be made available for other
6.28 operators. For example, \isa{xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymlongrightarrow}\ xs\ {\isacharat}\ xs\ {\isacharequal}\ xs} simplifies to \isa{True} because we may use \isa{xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}} when simplifying \isa{xs\ {\isacharat}\ xs\ {\isacharequal}\ xs}. The generation of contextual information during simplification is
6.29 controlled by so-called \bfindex{congruence rules}. This is the one for
6.30 \isa{{\isasymlongrightarrow}}:
6.31 @@ -41,14 +41,14 @@
6.32 \ \ \ \ \ {\isasymlbrakk}A\ {\isacharequal}\ B{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ x\ {\isasymin}\ B\ {\isasymLongrightarrow}\ P\ x\ {\isacharequal}\ Q\ x{\isasymrbrakk}\isanewline
6.33 \isaindent{\ \ \ \ \ }{\isasymLongrightarrow}\ {\isacharparenleft}{\isasymforall}x{\isasymin}A{\isachardot}\ P\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isasymforall}x{\isasymin}B{\isachardot}\ Q\ x{\isacharparenright}%
6.34 \end{isabelle}
6.35 -The congruence rule for conditional expressions supplies contextual
6.36 +One congruence rule for conditional expressions supplies contextual
6.37 information for simplifying the \isa{then} and \isa{else} cases:
6.38 \begin{isabelle}%
6.39 \ \ \ \ \ {\isasymlbrakk}b\ {\isacharequal}\ c{\isacharsemicolon}\ c\ {\isasymLongrightarrow}\ x\ {\isacharequal}\ u{\isacharsemicolon}\ {\isasymnot}\ c\ {\isasymLongrightarrow}\ y\ {\isacharequal}\ v{\isasymrbrakk}\isanewline
6.40 \isaindent{\ \ \ \ \ }{\isasymLongrightarrow}\ {\isacharparenleft}if\ b\ then\ x\ else\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ c\ then\ u\ else\ v{\isacharparenright}%
6.41 \end{isabelle}
6.42 -A congruence rule can also \emph{prevent} simplification of some arguments.
6.43 -Here is an alternative congruence rule for conditional expressions:
6.44 +An alternative congruence rule for conditional expressions
6.45 +actually \emph{prevents} simplification of some arguments:
6.46 \begin{isabelle}%
6.47 \ \ \ \ \ b\ {\isacharequal}\ c\ {\isasymLongrightarrow}\ {\isacharparenleft}if\ b\ then\ x\ else\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ c\ then\ x\ else\ y{\isacharparenright}%
6.48 \end{isabelle}
6.49 @@ -83,11 +83,8 @@
6.50 }
6.51 %
6.52 \begin{isamarkuptext}%
6.53 -\index{rewrite rule!permutative|bold}
6.54 -\index{rewriting!ordered|bold}
6.55 -\index{ordered rewriting|bold}
6.56 -\index{simplification!ordered|bold}
6.57 -An equation is a \bfindex{permutative rewrite rule} if the left-hand
6.58 +\index{rewrite rules!permutative|bold}%
6.59 +An equation is a \textbf{permutative rewrite rule} if the left-hand
6.60 side and right-hand side are the same up to renaming of variables. The most
6.61 common permutative rule is commutativity: \isa{x\ {\isacharplus}\ y\ {\isacharequal}\ y\ {\isacharplus}\ x}. Other examples
6.62 include \isa{x\ {\isacharminus}\ y\ {\isacharminus}\ z\ {\isacharequal}\ x\ {\isacharminus}\ z\ {\isacharminus}\ y} in arithmetic and \isa{insert\ x\ {\isacharparenleft}insert\ y\ A{\isacharparenright}\ {\isacharequal}\ insert\ y\ {\isacharparenleft}insert\ x\ A{\isacharparenright}} for sets. Such rules are problematic because
6.63 @@ -125,14 +122,15 @@
6.64 such tricks.%
6.65 \end{isamarkuptext}%
6.66 %
6.67 -\isamarkupsubsection{How it Works%
6.68 +\isamarkupsubsection{How the Simplifier Works%
6.69 }
6.70 %
6.71 \begin{isamarkuptext}%
6.72 \label{sec:SimpHow}
6.73 -Roughly speaking, the simplifier proceeds bottom-up (subterms are simplified
6.74 -first) and a conditional equation is only applied if its condition could be
6.75 -proved (again by simplification). Below we explain some special features of the rewriting process.%
6.76 +Roughly speaking, the simplifier proceeds bottom-up: subterms are simplified
6.77 +first. A conditional equation is only applied if its condition can be
6.78 +proved, again by simplification. Below we explain some special features of
6.79 +the rewriting process.%
6.80 \end{isamarkuptext}%
6.81 %
6.82 \isamarkupsubsubsection{Higher-Order Patterns%
6.83 @@ -141,32 +139,34 @@
6.84 \begin{isamarkuptext}%
6.85 \index{simplification rule|(}
6.86 So far we have pretended the simplifier can deal with arbitrary
6.87 -rewrite rules. This is not quite true. Due to efficiency (and
6.88 -potentially also computability) reasons, the simplifier expects the
6.89 +rewrite rules. This is not quite true. For reasons of feasibility,
6.90 +the simplifier expects the
6.91 left-hand side of each rule to be a so-called \emph{higher-order
6.92 -pattern}~\cite{nipkow-patterns}\indexbold{higher-order
6.93 -pattern}\indexbold{pattern, higher-order}. This restricts where
6.94 +pattern}~\cite{nipkow-patterns}\indexbold{patterns!higher-order}.
6.95 +This restricts where
6.96 unknowns may occur. Higher-order patterns are terms in $\beta$-normal
6.97 -form (this will always be the case unless you have done something
6.98 -strange) where each occurrence of an unknown is of the form
6.99 +form. (This means there are no subterms of the form $(\lambda x. M)(N)$.)
6.100 +Each occurrence of an unknown is of the form
6.101 $\Var{f}~x@1~\dots~x@n$, where the $x@i$ are distinct bound
6.102 variables. Thus all ordinary rewrite rules, where all unknowns are
6.103 -of base type, for example \isa{{\isacharquery}m\ {\isacharplus}\ {\isacharquery}n\ {\isacharplus}\ {\isacharquery}k\ {\isacharequal}\ {\isacharquery}m\ {\isacharplus}\ {\isacharparenleft}{\isacharquery}n\ {\isacharplus}\ {\isacharquery}k{\isacharparenright}}, are OK: if an unknown is
6.104 +of base type, for example \isa{{\isacharquery}m\ {\isacharplus}\ {\isacharquery}n\ {\isacharplus}\ {\isacharquery}k\ {\isacharequal}\ {\isacharquery}m\ {\isacharplus}\ {\isacharparenleft}{\isacharquery}n\ {\isacharplus}\ {\isacharquery}k{\isacharparenright}}, are acceptable: if an unknown is
6.105 of base type, it cannot have any arguments. Additionally, the rule
6.106 -\isa{{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x\ {\isasymand}\ {\isacharquery}Q\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}Q\ x{\isacharparenright}{\isacharparenright}} is also OK, in
6.107 +\isa{{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x\ {\isasymand}\ {\isacharquery}Q\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}Q\ x{\isacharparenright}{\isacharparenright}} is also acceptable, in
6.108 both directions: all arguments of the unknowns \isa{{\isacharquery}P} and
6.109 \isa{{\isacharquery}Q} are distinct bound variables.
6.110
6.111 -If the left-hand side is not a higher-order pattern, not all is lost
6.112 -and the simplifier will still try to apply the rule, but only if it
6.113 -matches \emph{directly}, i.e.\ without much $\lambda$-calculus hocus
6.114 -pocus. For example, \isa{{\isacharquery}f\ {\isacharquery}x\ {\isasymin}\ range\ {\isacharquery}f\ {\isacharequal}\ True} rewrites
6.115 +If the left-hand side is not a higher-order pattern, all is not lost.
6.116 +The simplifier will still try to apply the rule provided it
6.117 +matches directly: without much $\lambda$-calculus hocus
6.118 +pocus. For example, \isa{{\isacharparenleft}{\isacharquery}f\ {\isacharquery}x\ {\isasymin}\ range\ {\isacharquery}f{\isacharparenright}\ {\isacharequal}\ True} rewrites
6.119 \isa{g\ a\ {\isasymin}\ range\ g} to \isa{True}, but will fail to match
6.120 \isa{g{\isacharparenleft}h\ b{\isacharparenright}\ {\isasymin}\ range{\isacharparenleft}{\isasymlambda}x{\isachardot}\ g{\isacharparenleft}h\ x{\isacharparenright}{\isacharparenright}}. However, you can
6.121 -replace the offending subterms (in our case \isa{{\isacharquery}f\ {\isacharquery}x}, which
6.122 -is not a pattern) by adding new variables and conditions: \isa{{\isacharquery}y\ {\isacharequal}\ {\isacharquery}f\ {\isacharquery}x\ {\isasymLongrightarrow}\ {\isacharquery}y\ {\isasymin}\ range\ {\isacharquery}f\ {\isacharequal}\ True} is fine
6.123 +eliminate the offending subterms --- those that are not patterns ---
6.124 +by adding new variables and conditions.
6.125 +In our example, we eliminate \isa{{\isacharquery}f\ {\isacharquery}x} and obtain
6.126 + \isa{{\isacharquery}y\ {\isacharequal}\ {\isacharquery}f\ {\isacharquery}x\ {\isasymLongrightarrow}\ {\isacharparenleft}{\isacharquery}y\ {\isasymin}\ range\ {\isacharquery}f{\isacharparenright}\ {\isacharequal}\ True}, which is fine
6.127 as a conditional rewrite rule since conditions can be arbitrary
6.128 -terms. However, this trick is not a panacea because the newly
6.129 +terms. However, this trick is not a panacea because the newly
6.130 introduced conditions may be hard to solve.
6.131
6.132 There is no restriction on the form of the right-hand
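[Editorial illustration, not part of the patched theory sources: the hunk above explains why a left-hand side like `?f ?x` is problematic. The reason is that higher-order matching of `?f ?x` against a term such as `g(h(b))` has more than one solution, so there is no single most general match; the simplifier therefore only tries the direct one. A small Python analogy, with made-up functions `g` and `h`:]

```python
def g(x):
    return x + 1

def h(x):
    return 2 * x

b = 3

# Matching ?f ?x against g(h(b)) admits (at least) two solutions:
sol1 = (g, h(b))                 # ?f = g,          ?x = h(b)
sol2 = (lambda x: g(h(x)), b)    # ?f = g o h,      ?x = b

# Both instantiations denote the same value, so neither is "the" match.
print(sol1[0](sol1[1]) == sol2[0](sol2[1]))  # -> True
```

This ambiguity is what the higher-order pattern restriction rules out, and what the conditional reformulation `?y = ?f ?x ==> ...` works around by moving the non-pattern subterm into a condition.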
7.1 --- a/doc-src/TutorialI/Advanced/simp.thy Thu Aug 09 10:17:45 2001 +0200
7.2 +++ b/doc-src/TutorialI/Advanced/simp.thy Thu Aug 09 18:12:15 2001 +0200
7.3 @@ -5,10 +5,9 @@
7.4 section{*Simplification*}
7.5
7.6 text{*\label{sec:simplification-II}\index{simplification|(}
7.7 -This section discusses some additional nifty features not covered so far and
7.8 -gives a short introduction to the simplification process itself. The latter
7.9 -is helpful to understand why a particular rule does or does not apply in some
7.10 -situation.
7.11 +This section describes features not covered until now. It also
7.12 +outlines the simplification process itself, which can be helpful
7.13 +when the simplifier does not do what you expect of it.
7.14 *}
7.15
7.16 subsection{*Advanced Features*}
7.17 @@ -16,9 +15,10 @@
7.18 subsubsection{*Congruence Rules*}
7.19
7.20 text{*\label{sec:simp-cong}
7.21 -It is hardwired into the simplifier that while simplifying the conclusion $Q$
7.22 -of $P \Imp Q$ it is legal to make uses of the assumption $P$. This
7.23 -kind of contextual information can also be made available for other
7.24 +While simplifying the conclusion $Q$
7.25 +of $P \Imp Q$, it is legal to use the assumption $P$.
7.26 +For $\Imp$ this policy is hardwired, but
7.27 +contextual information can also be made available for other
7.28 operators. For example, @{prop"xs = [] --> xs@xs = xs"} simplifies to @{term
7.29 True} because we may use @{prop"xs = []"} when simplifying @{prop"xs@xs =
7.30 xs"}. The generation of contextual information during simplification is
7.31 @@ -33,11 +33,11 @@
7.32 Here are some more examples. The congruence rules for bounded
7.33 quantifiers supply contextual information about the bound variable:
7.34 @{thm[display,eta_contract=false,margin=60]ball_cong[no_vars]}
7.35 -The congruence rule for conditional expressions supplies contextual
7.36 +One congruence rule for conditional expressions supplies contextual
7.37 information for simplifying the @{text then} and @{text else} cases:
7.38 @{thm[display]if_cong[no_vars]}
7.39 -A congruence rule can also \emph{prevent} simplification of some arguments.
7.40 -Here is an alternative congruence rule for conditional expressions:
7.41 +An alternative congruence rule for conditional expressions
7.42 +actually \emph{prevents} simplification of some arguments:
7.43 @{thm[display]if_weak_cong[no_vars]}
7.44 Only the first argument is simplified; the others remain unchanged.
7.45 This makes simplification much faster and is faithful to the evaluation
7.46 @@ -67,11 +67,8 @@
7.47 subsubsection{*Permutative Rewrite Rules*}
7.48
7.49 text{*
7.50 -\index{rewrite rule!permutative|bold}
7.51 -\index{rewriting!ordered|bold}
7.52 -\index{ordered rewriting|bold}
7.53 -\index{simplification!ordered|bold}
7.54 -An equation is a \bfindex{permutative rewrite rule} if the left-hand
7.55 +\index{rewrite rules!permutative|bold}%
7.56 +An equation is a \textbf{permutative rewrite rule} if the left-hand
7.57 side and right-hand side are the same up to renaming of variables. The most
7.58 common permutative rule is commutativity: @{prop"x+y = y+x"}. Other examples
7.59 include @{prop"(x-y)-z = (x-z)-y"} in arithmetic and @{prop"insert x (insert
7.60 @@ -110,45 +107,48 @@
7.61 such tricks.
7.62 *}
7.63
7.64 -subsection{*How it Works*}
7.65 +subsection{*How the Simplifier Works*}
7.66
7.67 text{*\label{sec:SimpHow}
7.68 -Roughly speaking, the simplifier proceeds bottom-up (subterms are simplified
7.69 -first) and a conditional equation is only applied if its condition could be
7.70 -proved (again by simplification). Below we explain some special features of the rewriting process.
7.71 +Roughly speaking, the simplifier proceeds bottom-up: subterms are simplified
7.72 +first. A conditional equation is only applied if its condition can be
7.73 +proved, again by simplification. Below we explain some special features of
7.74 +the rewriting process.
7.75 *}
7.76
7.77 subsubsection{*Higher-Order Patterns*}
7.78
7.79 text{*\index{simplification rule|(}
7.80 So far we have pretended the simplifier can deal with arbitrary
7.81 -rewrite rules. This is not quite true. Due to efficiency (and
7.82 -potentially also computability) reasons, the simplifier expects the
7.83 +rewrite rules. This is not quite true. For reasons of feasibility,
7.84 +the simplifier expects the
7.85 left-hand side of each rule to be a so-called \emph{higher-order
7.86 -pattern}~\cite{nipkow-patterns}\indexbold{higher-order
7.87 -pattern}\indexbold{pattern, higher-order}. This restricts where
7.88 +pattern}~\cite{nipkow-patterns}\indexbold{patterns!higher-order}.
7.89 +This restricts where
7.90 unknowns may occur. Higher-order patterns are terms in $\beta$-normal
7.91 -form (this will always be the case unless you have done something
7.92 -strange) where each occurrence of an unknown is of the form
7.93 +form. (This means there are no subterms of the form $(\lambda x. M)(N)$.)
7.94 +Each occurrence of an unknown is of the form
7.95 $\Var{f}~x@1~\dots~x@n$, where the $x@i$ are distinct bound
7.96 variables. Thus all ordinary rewrite rules, where all unknowns are
7.97 -of base type, for example @{thm add_assoc}, are OK: if an unknown is
7.98 +of base type, for example @{thm add_assoc}, are acceptable: if an unknown is
7.99 of base type, it cannot have any arguments. Additionally, the rule
7.100 -@{text"(\<forall>x. ?P x \<and> ?Q x) = ((\<forall>x. ?P x) \<and> (\<forall>x. ?Q x))"} is also OK, in
7.101 +@{text"(\<forall>x. ?P x \<and> ?Q x) = ((\<forall>x. ?P x) \<and> (\<forall>x. ?Q x))"} is also acceptable, in
7.102 both directions: all arguments of the unknowns @{text"?P"} and
7.103 @{text"?Q"} are distinct bound variables.
7.104
7.105 -If the left-hand side is not a higher-order pattern, not all is lost
7.106 -and the simplifier will still try to apply the rule, but only if it
7.107 -matches \emph{directly}, i.e.\ without much $\lambda$-calculus hocus
7.108 -pocus. For example, @{text"?f ?x \<in> range ?f = True"} rewrites
7.109 +If the left-hand side is not a higher-order pattern, all is not lost.
7.110 +The simplifier will still try to apply the rule provided it
7.111 +matches directly: without much $\lambda$-calculus hocus
7.112 +pocus. For example, @{text"(?f ?x \<in> range ?f) = True"} rewrites
7.113 @{term"g a \<in> range g"} to @{term True}, but will fail to match
7.114 @{text"g(h b) \<in> range(\<lambda>x. g(h x))"}. However, you can
7.115 -replace the offending subterms (in our case @{text"?f ?x"}, which
7.116 -is not a pattern) by adding new variables and conditions: @{text"?y =
7.117 -?f ?x \<Longrightarrow> ?y \<in> range ?f = True"} is fine
7.118 +eliminate the offending subterms --- those that are not patterns ---
7.119 +by adding new variables and conditions.
7.120 +In our example, we eliminate @{text"?f ?x"} and obtain
7.121 + @{text"?y =
7.122 +?f ?x \<Longrightarrow> (?y \<in> range ?f) = True"}, which is fine
7.123 as a conditional rewrite rule since conditions can be arbitrary
7.124 -terms. However, this trick is not a panacea because the newly
7.125 +terms. However, this trick is not a panacea because the newly
7.126 introduced conditions may be hard to solve.
7.127
7.128 There is no restriction on the form of the right-hand
8.1 --- a/doc-src/TutorialI/CTL/CTL.thy Thu Aug 09 10:17:45 2001 +0200
8.2 +++ b/doc-src/TutorialI/CTL/CTL.thy Thu Aug 09 18:12:15 2001 +0200
8.3 @@ -3,6 +3,7 @@
8.4 subsection{*Computation Tree Logic --- CTL*};
8.5
8.6 text{*\label{sec:CTL}
8.7 +\index{CTL|(}%
8.8 The semantics of PDL only needs reflexive transitive closure.
8.9 Let us be adventurous and introduce a more expressive temporal operator.
8.10 We extend the datatype
8.11 @@ -27,7 +28,7 @@
8.12 "Paths s \<equiv> {p. s = p 0 \<and> (\<forall>i. (p i, p(i+1)) \<in> M)}";
8.13
8.14 text{*\noindent
8.15 -This definition allows a very succinct statement of the semantics of @{term AF}:
8.16 +This definition allows a succinct statement of the semantics of @{term AF}:
8.17 \footnote{Do not be misled: neither datatypes nor recursive functions can be
8.18 extended by new constructors or equations. This is just a trick of the
8.19 presentation. In reality one has to define a new datatype and a new function.}
8.20 @@ -108,9 +109,9 @@
8.21 "lfp(af A) \<subseteq> {s. \<forall> p \<in> Paths s. \<exists> i. p i \<in> A}";
8.22
8.23 txt{*\noindent
8.24 -In contrast to the analogous property for @{term EF}, and just
8.25 -for a change, we do not use fixed point induction but a weaker theorem,
8.26 -also known in the literature (after David Park) as \emph{Park-induction}:
8.27 +In contrast to the analogous proof for @{term EF}, and just
8.28 +for a change, we do not use fixed point induction. Park-induction,
8.29 +named after David Park, is weaker but sufficient for this proof:
8.30 \begin{center}
8.31 @{thm lfp_lowerbound[of _ "S",no_vars]} \hfill (@{thm[source]lfp_lowerbound})
8.32 \end{center}
8.33 @@ -137,20 +138,19 @@
8.34
8.35 txt{*
8.36 @{subgoals[display,indent=0,margin=70,goals_limit=1]}
8.37 -It merely remains to set @{term pa} to @{term"\<lambda>i. p(i+1)"}, i.e.\ @{term p} without its
8.38 -first element. The rest is practically automatic:
8.39 +It merely remains to set @{term pa} to @{term"\<lambda>i. p(i+1)"}, that is,
8.40 +@{term p} without its first element. The rest is automatic:
8.41 *};
8.42
8.43 apply(erule_tac x = "\<lambda>i. p(i+1)" in allE);
8.44 -apply simp;
8.45 -apply blast;
8.46 +apply force;
8.47 done;
8.48
8.49
8.50 text{*
8.51 The opposite inclusion is proved by contradiction: if some state
8.52 @{term s} is not in @{term"lfp(af A)"}, then we can construct an
8.53 -infinite @{term A}-avoiding path starting from @{term s}. The reason is
8.54 +infinite @{term A}-avoiding path starting from~@{term s}. The reason is
8.55 that by unfolding @{term lfp} we find that if @{term s} is not in
8.56 @{term"lfp(af A)"}, then @{term s} is not in @{term A} and there is a
8.57 direct successor of @{term s} that is again not in \mbox{@{term"lfp(af
8.58 @@ -219,9 +219,9 @@
8.59 apply(clarsimp);
8.60
8.61 txt{*\noindent
8.62 -After simplification and clarification, the subgoal has the following form
8.63 +After simplification and clarification, the subgoal has the following form:
8.64 @{subgoals[display,indent=0,margin=70,goals_limit=1]}
8.65 -and invites a proof by induction on @{term i}:
8.66 +It invites a proof by induction on @{term i}:
8.67 *};
8.68
8.69 apply(induct_tac i);
8.70 @@ -244,7 +244,7 @@
8.71 apply(fast intro:someI2_ex);
8.72
8.73 txt{*\noindent
8.74 -What is worth noting here is that we have used @{text fast} rather than
8.75 +What is worth noting here is that we have used \methdx{fast} rather than
8.76 @{text blast}. The reason is that @{text blast} would fail because it cannot
8.77 cope with @{thm[source]someI2_ex}: unifying its conclusion with the current
8.78 subgoal is non-trivial because of the nested schematic variables. For
8.79 @@ -349,8 +349,8 @@
8.80
8.81 The language defined above is not quite CTL\@. The latter also includes an
8.82 until-operator @{term"EU f g"} with semantics ``there \emph{E}xists a path
8.83 -where @{term f} is true \emph{U}ntil @{term g} becomes true''. With the help
8.84 -of an auxiliary function
8.85 +where @{term f} is true \emph{U}ntil @{term g} becomes true''. We need
8.86 +an auxiliary function:
8.87 *}
8.88
8.89 consts until:: "state set \<Rightarrow> state set \<Rightarrow> state \<Rightarrow> state list \<Rightarrow> bool"
8.90 @@ -362,7 +362,7 @@
8.91 "eusem A B \<equiv> {s. \<exists>p. until A B s p}"(*>*)
8.92
8.93 text{*\noindent
8.94 -the semantics of @{term EU} is straightforward:
8.95 +Expressing the semantics of @{term EU} is now straightforward:
8.96 @{prop[display]"s \<Turnstile> EU f g = (\<exists>p. until {t. t \<Turnstile> f} {t. t \<Turnstile> g} s p)"}
8.97 Note that @{term EU} is not definable in terms of the other operators!
8.98
8.99 @@ -465,6 +465,7 @@
8.100 in the case of finite sets and a monotone function~@{term F},
8.101 the value of \mbox{@{term"lfp F"}} can be computed by iterated application of @{term F} to~@{term"{}"} until
8.102 a fixed point is reached. It is actually possible to generate executable functional programs
8.103 -from HOL definitions, but that is beyond the scope of the tutorial.
8.104 +from HOL definitions, but that is beyond the scope of the tutorial.%
8.105 +\index{CTL|)}
8.106 *}
8.107 (*<*)end(*>*)
9.1 --- a/doc-src/TutorialI/CTL/CTLind.thy Thu Aug 09 10:17:45 2001 +0200
9.2 +++ b/doc-src/TutorialI/CTL/CTLind.thy Thu Aug 09 18:12:15 2001 +0200
9.3 @@ -3,13 +3,14 @@
9.4 subsection{*CTL Revisited*}
9.5
9.6 text{*\label{sec:CTL-revisited}
9.7 -The purpose of this section is twofold: we want to demonstrate
9.8 -some of the induction principles and heuristics discussed above and we want to
9.9 +\index{CTL|(}%
9.10 +The purpose of this section is twofold: to demonstrate
9.11 +some of the induction principles and heuristics discussed above and to
9.12 show how inductive definitions can simplify proofs.
9.13 In \S\ref{sec:CTL} we gave a fairly involved proof of the correctness of a
9.14 model checker for CTL\@. In particular the proof of the
9.15 @{thm[source]infinity_lemma} on the way to @{thm[source]AF_lemma2} is not as
9.16 -simple as one might intuitively expect, due to the @{text SOME} operator
9.17 +simple as one might expect, due to the @{text SOME} operator
9.18 involved. Below we give a simpler proof of @{thm[source]AF_lemma2}
9.19 based on an auxiliary inductive definition.
9.20
9.21 @@ -50,7 +51,7 @@
9.22 done
9.23
9.24 text{*\noindent
9.25 -The base case (@{prop"t = s"}) is trivial (@{text blast}).
9.26 +The base case (@{prop"t = s"}) is trivial and proved by @{text blast}.
9.27 In the induction step, we have an infinite @{term A}-avoiding path @{term f}
9.28 starting from @{term u}, a successor of @{term t}. Now we simply instantiate
9.29 the @{text"\<forall>f\<in>Paths t"} in the induction hypothesis by the path starting with
9.30 @@ -61,7 +62,7 @@
9.31 Now we come to the key lemma. Assuming that no infinite @{term A}-avoiding
9.32 path starts from @{term s}, we want to show @{prop"s \<in> lfp(af A)"}. For the
9.33 inductive proof this must be generalized to the statement that every point @{term t}
9.34 -``between'' @{term s} and @{term A}, i.e.\ all of @{term"Avoid s A"},
9.35 +``between'' @{term s} and @{term A}, in other words, all of @{term"Avoid s A"},
9.36 is contained in @{term"lfp(af A)"}:
9.37 *}
9.38
9.39 @@ -77,7 +78,7 @@
9.40 again trivial.
9.41
9.42 The formal counterpart of this proof sketch is a well-founded induction
9.43 -w.r.t.\ @{term M} restricted to (roughly speaking) @{term"Avoid s A - A"}:
9.44 +on~@{term M} restricted to @{term"Avoid s A - A"}, roughly speaking:
9.45 @{term[display]"{(y,x). (x,y) \<in> M \<and> x \<in> Avoid s A \<and> x \<notin> A}"}
9.46 As we shall see presently, the absence of infinite @{term A}-avoiding paths
9.47 starting from @{term s} implies well-foundedness of this relation. For the
9.48 @@ -95,11 +96,11 @@
9.49 then all successors of @{term t} that are in @{term"Avoid s A"} are in
9.50 @{term"lfp (af A)"}. Unfolding @{term lfp} in the conclusion of the first
9.51 subgoal once, we have to prove that @{term t} is in @{term A} or all successors
9.52 -of @{term t} are in @{term"lfp (af A)"}: if @{term t} is not in @{term A},
9.53 +of @{term t} are in @{term"lfp (af A)"}. But if @{term t} is not in @{term A},
9.54 the second
9.55 @{term Avoid}-rule implies that all successors of @{term t} are in
9.56 -@{term"Avoid s A"} (because we also assume @{prop"t \<in> Avoid s A"}), and
9.57 -hence, by the induction hypothesis, all successors of @{term t} are indeed in
9.58 +@{term"Avoid s A"}, because we also assume @{prop"t \<in> Avoid s A"}.
9.59 +Hence, by the induction hypothesis, all successors of @{term t} are indeed in
9.60 @{term"lfp(af A)"}. Mechanically:
9.61 *}
9.62
9.63 @@ -108,8 +109,8 @@
9.64 apply(blast intro:Avoid.intros);
9.65
9.66 txt{*
9.67 -Having proved the main goal we return to the proof obligation that the above
9.68 -relation is indeed well-founded. This is proved by contradiction: if
9.69 +Having proved the main goal, we return to the proof obligation that the
9.70 +relation used above is indeed well-founded. This is proved by contradiction: if
9.71 the relation is not well-founded then there exists an infinite @{term
9.72 A}-avoiding path all in @{term"Avoid s A"}, by theorem
9.73 @{thm[source]wf_iff_no_infinite_down_chain}:
9.74 @@ -128,14 +129,15 @@
9.75 text{*
9.76 The @{text"(no_asm)"} modifier of the @{text"rule_format"} directive in the
9.77 statement of the lemma means
9.78 -that the assumption is left unchanged --- otherwise the @{text"\<forall>p"} is turned
9.79 +that the assumption is left unchanged; otherwise the @{text"\<forall>p"}
9.80 +would be turned
9.81 into a @{text"\<And>p"}, which would complicate matters below. As it is,
9.82 @{thm[source]Avoid_in_lfp} is now
9.83 @{thm[display]Avoid_in_lfp[no_vars]}
9.84 The main theorem is simply the corollary where @{prop"t = s"},
9.85 -in which case the assumption @{prop"t \<in> Avoid s A"} is trivially true
9.86 -by the first @{term Avoid}-rule. Isabelle confirms this:
9.87 -*}
9.88 +when the assumption @{prop"t \<in> Avoid s A"} is trivially true
9.89 +by the first @{term Avoid}-rule. Isabelle confirms this:%
9.90 +\index{CTL|)}*}
9.91
9.92 theorem AF_lemma2: "{s. \<forall>p \<in> Paths s. \<exists> i. p i \<in> A} \<subseteq> lfp(af A)";
9.93 by(auto elim:Avoid_in_lfp intro:Avoid.intros);
10.1 --- a/doc-src/TutorialI/CTL/document/CTL.tex Thu Aug 09 10:17:45 2001 +0200
10.2 +++ b/doc-src/TutorialI/CTL/document/CTL.tex Thu Aug 09 18:12:15 2001 +0200
10.3 @@ -7,6 +7,7 @@
10.4 %
10.5 \begin{isamarkuptext}%
10.6 \label{sec:CTL}
10.7 +\index{CTL|(}%
10.8 The semantics of PDL only needs reflexive transitive closure.
10.9 Let us be adventurous and introduce a more expressive temporal operator.
10.10 We extend the datatype
10.11 @@ -24,7 +25,7 @@
10.12 \ \ \ \ \ \ \ \ \ {\isachardoublequote}Paths\ s\ {\isasymequiv}\ {\isacharbraceleft}p{\isachardot}\ s\ {\isacharequal}\ p\ {\isadigit{0}}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}i{\isachardot}\ {\isacharparenleft}p\ i{\isacharcomma}\ p{\isacharparenleft}i{\isacharplus}{\isadigit{1}}{\isacharparenright}{\isacharparenright}\ {\isasymin}\ M{\isacharparenright}{\isacharbraceright}{\isachardoublequote}%
10.13 \begin{isamarkuptext}%
10.14 \noindent
10.15 -This definition allows a very succinct statement of the semantics of \isa{AF}:
10.16 +This definition allows a succinct statement of the semantics of \isa{AF}:
10.17 \footnote{Do not be misled: neither datatypes nor recursive functions can be
10.18 extended by new constructors or equations. This is just a trick of the
10.19 presentation. In reality one has to define a new datatype and a new function.}%
10.20 @@ -62,9 +63,9 @@
10.21 \ \ {\isachardoublequote}lfp{\isacharparenleft}af\ A{\isacharparenright}\ {\isasymsubseteq}\ {\isacharbraceleft}s{\isachardot}\ {\isasymforall}\ p\ {\isasymin}\ Paths\ s{\isachardot}\ {\isasymexists}\ i{\isachardot}\ p\ i\ {\isasymin}\ A{\isacharbraceright}{\isachardoublequote}%
10.22 \begin{isamarkuptxt}%
10.23 \noindent
10.24 -In contrast to the analogous property for \isa{EF}, and just
10.25 -for a change, we do not use fixed point induction but a weaker theorem,
10.26 -also known in the literature (after David Park) as \emph{Park-induction}:
10.27 +In contrast to the analogous proof for \isa{EF}, and just
10.28 +for a change, we do not use fixed point induction. Park-induction,
10.29 +named after David Park, is weaker but sufficient for this proof:
10.30 \begin{center}
10.31 \isa{f\ S\ {\isasymsubseteq}\ S\ {\isasymLongrightarrow}\ lfp\ f\ {\isasymsubseteq}\ S} \hfill (\isa{lfp{\isacharunderscore}lowerbound})
10.32 \end{center}
10.33 @@ -95,21 +96,20 @@
10.34 \begin{isamarkuptxt}%
10.35 \begin{isabelle}%
10.36 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ {\isasymlbrakk}{\isasymforall}i{\isachardot}\ {\isacharparenleft}p\ i{\isacharcomma}\ p\ {\isacharparenleft}Suc\ i{\isacharparenright}{\isacharparenright}\ {\isasymin}\ M{\isacharsemicolon}\isanewline
10.37 -\isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ \ \ \ }{\isasymforall}pa{\isachardot}\ p\ {\isadigit{1}}\ {\isacharequal}\ pa\ {\isadigit{0}}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}i{\isachardot}\ {\isacharparenleft}pa\ i{\isacharcomma}\ pa\ {\isacharparenleft}Suc\ i{\isacharparenright}{\isacharparenright}\ {\isasymin}\ M{\isacharparenright}\ {\isasymlongrightarrow}\isanewline
10.38 +\isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ \ \ \ }{\isasymforall}pa{\isachardot}\ p\ {\isadigit{1}}{\isacharprime}\ {\isacharequal}\ pa\ {\isadigit{0}}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}i{\isachardot}\ {\isacharparenleft}pa\ i{\isacharcomma}\ pa\ {\isacharparenleft}Suc\ i{\isacharparenright}{\isacharparenright}\ {\isasymin}\ M{\isacharparenright}\ {\isasymlongrightarrow}\isanewline
10.39 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ \ \ \ {\isasymforall}pa{\isachardot}\ }{\isacharparenleft}{\isasymexists}i{\isachardot}\ pa\ i\ {\isasymin}\ A{\isacharparenright}{\isasymrbrakk}\isanewline
10.40 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}p{\isachardot}\ }{\isasymLongrightarrow}\ {\isasymexists}i{\isachardot}\ p\ i\ {\isasymin}\ A%
10.41 \end{isabelle}
10.42 -It merely remains to set \isa{pa} to \isa{{\isasymlambda}i{\isachardot}\ p\ {\isacharparenleft}i\ {\isacharplus}\ {\isadigit{1}}{\isacharparenright}}, i.e.\ \isa{p} without its
10.43 -first element. The rest is practically automatic:%
10.44 +It merely remains to set \isa{pa} to \isa{{\isasymlambda}i{\isachardot}\ p\ {\isacharparenleft}i\ {\isacharplus}\ {\isadigit{1}}{\isacharparenright}}, that is,
10.45 +\isa{p} without its first element. The rest is automatic:%
10.46 \end{isamarkuptxt}%
10.47 \isacommand{apply}{\isacharparenleft}erule{\isacharunderscore}tac\ x\ {\isacharequal}\ {\isachardoublequote}{\isasymlambda}i{\isachardot}\ p{\isacharparenleft}i{\isacharplus}{\isadigit{1}}{\isacharparenright}{\isachardoublequote}\ \isakeyword{in}\ allE{\isacharparenright}\isanewline
10.48 -\isacommand{apply}\ simp\isanewline
10.49 -\isacommand{apply}\ blast\isanewline
10.50 +\isacommand{apply}\ force\isanewline
10.51 \isacommand{done}%
10.52 \begin{isamarkuptext}%
10.53 The opposite inclusion is proved by contradiction: if some state
10.54 \isa{s} is not in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}, then we can construct an
10.55 -infinite \isa{A}-avoiding path starting from \isa{s}. The reason is
10.56 +infinite \isa{A}-avoiding path starting from~\isa{s}. The reason is
10.57 that by unfolding \isa{lfp} we find that if \isa{s} is not in
10.58 \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}, then \isa{s} is not in \isa{A} and there is a
10.59 direct successor of \isa{s} that is again not in \mbox{\isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}}. Iterating this argument yields the promised infinite
10.60 @@ -171,13 +171,13 @@
10.61 \isacommand{apply}{\isacharparenleft}clarsimp{\isacharparenright}%
10.62 \begin{isamarkuptxt}%
10.63 \noindent
10.64 -After simplification and clarification, the subgoal has the following form
10.65 +After simplification and clarification, the subgoal has the following form:
10.66 \begin{isabelle}%
10.67 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}i{\isachardot}\ {\isasymlbrakk}Q\ s{\isacharsemicolon}\ {\isasymforall}s{\isachardot}\ Q\ s\ {\isasymlongrightarrow}\ {\isacharparenleft}{\isasymexists}t{\isachardot}\ {\isacharparenleft}s{\isacharcomma}\ t{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ Q\ t{\isacharparenright}{\isasymrbrakk}\isanewline
10.68 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}i{\isachardot}\ }{\isasymLongrightarrow}\ {\isacharparenleft}path\ s\ Q\ i{\isacharcomma}\ SOME\ t{\isachardot}\ {\isacharparenleft}path\ s\ Q\ i{\isacharcomma}\ t{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ Q\ t{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\isanewline
10.69 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}i{\isachardot}\ {\isasymLongrightarrow}\ }Q\ {\isacharparenleft}path\ s\ Q\ i{\isacharparenright}%
10.70 \end{isabelle}
10.71 -and invites a proof by induction on \isa{i}:%
10.72 +It invites a proof by induction on \isa{i}:%
10.73 \end{isamarkuptxt}%
10.74 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ i{\isacharparenright}\isanewline
10.75 \ \isacommand{apply}{\isacharparenleft}simp{\isacharparenright}%
10.76 @@ -203,7 +203,7 @@
10.77 \ \isacommand{apply}{\isacharparenleft}fast\ intro{\isacharcolon}someI{\isadigit{2}}{\isacharunderscore}ex{\isacharparenright}%
10.78 \begin{isamarkuptxt}%
10.79 \noindent
10.80 -What is worth noting here is that we have used \isa{fast} rather than
10.81 +What is worth noting here is that we have used \methdx{fast} rather than
10.82 \isa{blast}. The reason is that \isa{blast} would fail because it cannot
10.83 cope with \isa{someI{\isadigit{2}}{\isacharunderscore}ex}: unifying its conclusion with the current
10.84 subgoal is non-trivial because of the nested schematic variables. For
10.85 @@ -283,8 +283,8 @@
10.86 \begin{isamarkuptext}%
10.87 The language defined above is not quite CTL\@. The latter also includes an
10.88 until-operator \isa{EU\ f\ g} with semantics ``there \emph{E}xists a path
10.89 -where \isa{f} is true \emph{U}ntil \isa{g} becomes true''. With the help
10.90 -of an auxiliary function%
10.91 +where \isa{f} is true \emph{U}ntil \isa{g} becomes true''. We need
10.92 +an auxiliary function:%
10.93 \end{isamarkuptext}%
10.94 \isacommand{consts}\ until{\isacharcolon}{\isacharcolon}\ {\isachardoublequote}state\ set\ {\isasymRightarrow}\ state\ set\ {\isasymRightarrow}\ state\ {\isasymRightarrow}\ state\ list\ {\isasymRightarrow}\ bool{\isachardoublequote}\isanewline
10.95 \isacommand{primrec}\isanewline
10.96 @@ -292,7 +292,7 @@
10.97 {\isachardoublequote}until\ A\ B\ s\ {\isacharparenleft}t{\isacharhash}p{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}s\ {\isasymin}\ A\ {\isasymand}\ {\isacharparenleft}s{\isacharcomma}t{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ until\ A\ B\ t\ p{\isacharparenright}{\isachardoublequote}%
10.98 \begin{isamarkuptext}%
10.99 \noindent
10.100 -the semantics of \isa{EU} is straightforward:
10.101 +Expressing the semantics of \isa{EU} is now straightforward:
10.102 \begin{isabelle}%
10.103 \ \ \ \ \ s\ {\isasymTurnstile}\ EU\ f\ g\ {\isacharequal}\ {\isacharparenleft}{\isasymexists}p{\isachardot}\ until\ {\isacharbraceleft}t{\isachardot}\ t\ {\isasymTurnstile}\ f{\isacharbraceright}\ {\isacharbraceleft}t{\isachardot}\ t\ {\isasymTurnstile}\ g{\isacharbraceright}\ s\ p{\isacharparenright}%
10.104 \end{isabelle}
10.105 @@ -327,6 +327,7 @@
10.106 the value of \mbox{\isa{lfp\ F}} can be computed by iterated application of \isa{F} to~\isa{{\isacharbraceleft}{\isacharbraceright}} until
10.107 a fixed point is reached. It is actually possible to generate executable functional programs
10.108 from HOL definitions, but that is beyond the scope of the tutorial.%
10.109 +\index{CTL|)}%
10.110 \end{isamarkuptext}%
10.111 \end{isabellebody}%
10.112 %%% Local Variables:
11.1 --- a/doc-src/TutorialI/CTL/document/CTLind.tex Thu Aug 09 10:17:45 2001 +0200
11.2 +++ b/doc-src/TutorialI/CTL/document/CTLind.tex Thu Aug 09 18:12:15 2001 +0200
11.3 @@ -7,13 +7,14 @@
11.4 %
11.5 \begin{isamarkuptext}%
11.6 \label{sec:CTL-revisited}
11.7 -The purpose of this section is twofold: we want to demonstrate
11.8 -some of the induction principles and heuristics discussed above and we want to
11.9 +\index{CTL|(}%
11.10 +The purpose of this section is twofold: to demonstrate
11.11 +some of the induction principles and heuristics discussed above and to
11.12 show how inductive definitions can simplify proofs.
11.13 In \S\ref{sec:CTL} we gave a fairly involved proof of the correctness of a
11.14 model checker for CTL\@. In particular the proof of the
11.15 \isa{infinity{\isacharunderscore}lemma} on the way to \isa{AF{\isacharunderscore}lemma{\isadigit{2}}} is not as
11.16 -simple as one might intuitively expect, due to the \isa{SOME} operator
11.17 +simple as one might expect, due to the \isa{SOME} operator
11.18 involved. Below we give a simpler proof of \isa{AF{\isacharunderscore}lemma{\isadigit{2}}}
11.19 based on an auxiliary inductive definition.
11.20
11.21 @@ -50,7 +51,7 @@
11.22 \isacommand{done}%
11.23 \begin{isamarkuptext}%
11.24 \noindent
11.25 -The base case (\isa{t\ {\isacharequal}\ s}) is trivial (\isa{blast}).
11.26 +The base case (\isa{t\ {\isacharequal}\ s}) is trivial and proved by \isa{blast}.
11.27 In the induction step, we have an infinite \isa{A}-avoiding path \isa{f}
11.28 starting from \isa{u}, a successor of \isa{t}. Now we simply instantiate
11.29 the \isa{{\isasymforall}f{\isasymin}Paths\ t} in the induction hypothesis by the path starting with
11.30 @@ -61,7 +62,7 @@
11.31 Now we come to the key lemma. Assuming that no infinite \isa{A}-avoiding
11.32 path starts from \isa{s}, we want to show \isa{s\ {\isasymin}\ lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. For the
11.33 inductive proof this must be generalized to the statement that every point \isa{t}
11.34 -``between'' \isa{s} and \isa{A}, i.e.\ all of \isa{Avoid\ s\ A},
11.35 +``between'' \isa{s} and \isa{A}, in other words, all of \isa{Avoid\ s\ A},
11.36 is contained in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}:%
11.37 \end{isamarkuptext}%
11.38 \isacommand{lemma}\ Avoid{\isacharunderscore}in{\isacharunderscore}lfp{\isacharbrackleft}rule{\isacharunderscore}format{\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}{\isacharbrackright}{\isacharcolon}\isanewline
11.39 @@ -75,7 +76,7 @@
11.40 again trivial.
11.41
11.42 The formal counterpart of this proof sketch is a well-founded induction
11.43 -w.r.t.\ \isa{M} restricted to (roughly speaking) \isa{Avoid\ s\ A\ {\isacharminus}\ A}:
11.44 +on~\isa{M} restricted to \isa{Avoid\ s\ A\ {\isacharminus}\ A}, roughly speaking:
11.45 \begin{isabelle}%
11.46 \ \ \ \ \ {\isacharbraceleft}{\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}{\isachardot}\ {\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ x\ {\isasymin}\ Avoid\ s\ A\ {\isasymand}\ x\ {\isasymnotin}\ A{\isacharbraceright}%
11.47 \end{isabelle}
11.48 @@ -100,19 +101,19 @@
11.49 then all successors of \isa{t} that are in \isa{Avoid\ s\ A} are in
11.50 \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. Unfolding \isa{lfp} in the conclusion of the first
11.51 subgoal once, we have to prove that \isa{t} is in \isa{A} or all successors
11.52 -of \isa{t} are in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}: if \isa{t} is not in \isa{A},
11.53 +of \isa{t} are in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. But if \isa{t} is not in \isa{A},
11.54 the second
11.55 \isa{Avoid}-rule implies that all successors of \isa{t} are in
11.56 -\isa{Avoid\ s\ A} (because we also assume \isa{t\ {\isasymin}\ Avoid\ s\ A}), and
11.57 -hence, by the induction hypothesis, all successors of \isa{t} are indeed in
11.58 +\isa{Avoid\ s\ A}, because we also assume \isa{t\ {\isasymin}\ Avoid\ s\ A}.
11.59 +Hence, by the induction hypothesis, all successors of \isa{t} are indeed in
11.60 \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. Mechanically:%
11.61 \end{isamarkuptxt}%
11.62 \ \isacommand{apply}{\isacharparenleft}subst\ lfp{\isacharunderscore}unfold{\isacharbrackleft}OF\ mono{\isacharunderscore}af{\isacharbrackright}{\isacharparenright}\isanewline
11.63 \ \isacommand{apply}{\isacharparenleft}simp\ {\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}\ add{\isacharcolon}\ af{\isacharunderscore}def{\isacharparenright}\isanewline
11.64 \ \isacommand{apply}{\isacharparenleft}blast\ intro{\isacharcolon}Avoid{\isachardot}intros{\isacharparenright}%
11.65 \begin{isamarkuptxt}%
11.66 -Having proved the main goal we return to the proof obligation that the above
11.67 -relation is indeed well-founded. This is proved by contradiction: if
11.68 +Having proved the main goal, we return to the proof obligation that the
11.69 +relation used above is indeed well-founded. This is proved by contradiction: if
11.70 the relation is not well-founded then there exists an infinite \isa{A}-avoiding path all in \isa{Avoid\ s\ A}, by theorem
11.71 \isa{wf{\isacharunderscore}iff{\isacharunderscore}no{\isacharunderscore}infinite{\isacharunderscore}down{\isacharunderscore}chain}:
11.72 \begin{isabelle}%
11.73 @@ -130,15 +131,17 @@
11.74 \begin{isamarkuptext}%
11.75 The \isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}} modifier of the \isa{rule{\isacharunderscore}format} directive in the
11.76 statement of the lemma means
11.77 -that the assumption is left unchanged --- otherwise the \isa{{\isasymforall}p} is turned
11.78 +that the assumption is left unchanged; otherwise the \isa{{\isasymforall}p}
11.79 +would be turned
11.80 into a \isa{{\isasymAnd}p}, which would complicate matters below. As it is,
11.81 \isa{Avoid{\isacharunderscore}in{\isacharunderscore}lfp} is now
11.82 \begin{isabelle}%
11.83 \ \ \ \ \ {\isasymlbrakk}{\isasymforall}p{\isasymin}Paths\ s{\isachardot}\ {\isasymexists}i{\isachardot}\ p\ i\ {\isasymin}\ A{\isacharsemicolon}\ t\ {\isasymin}\ Avoid\ s\ A{\isasymrbrakk}\ {\isasymLongrightarrow}\ t\ {\isasymin}\ lfp\ {\isacharparenleft}af\ A{\isacharparenright}%
11.84 \end{isabelle}
11.85 The main theorem is simply the corollary where \isa{t\ {\isacharequal}\ s},
11.86 -in which case the assumption \isa{t\ {\isasymin}\ Avoid\ s\ A} is trivially true
11.87 +when the assumption \isa{t\ {\isasymin}\ Avoid\ s\ A} is trivially true
11.88 by the first \isa{Avoid}-rule. Isabelle confirms this:%
11.89 +\index{CTL|)}%
11.90 \end{isamarkuptext}%
11.91 \isacommand{theorem}\ AF{\isacharunderscore}lemma{\isadigit{2}}{\isacharcolon}\ \ {\isachardoublequote}{\isacharbraceleft}s{\isachardot}\ {\isasymforall}p\ {\isasymin}\ Paths\ s{\isachardot}\ {\isasymexists}\ i{\isachardot}\ p\ i\ {\isasymin}\ A{\isacharbraceright}\ {\isasymsubseteq}\ lfp{\isacharparenleft}af\ A{\isacharparenright}{\isachardoublequote}\isanewline
11.92 \isacommand{by}{\isacharparenleft}auto\ elim{\isacharcolon}Avoid{\isacharunderscore}in{\isacharunderscore}lfp\ intro{\isacharcolon}Avoid{\isachardot}intros{\isacharparenright}\isanewline
12.1 --- a/doc-src/TutorialI/Inductive/AB.thy Thu Aug 09 10:17:45 2001 +0200
12.2 +++ b/doc-src/TutorialI/Inductive/AB.thy Thu Aug 09 18:12:15 2001 +0200
12.3 @@ -3,6 +3,7 @@
12.4 section{*Case Study: A Context Free Grammar*}
12.5
12.6 text{*\label{sec:CFG}
12.7 +\index{grammars!defining inductively|(}%
12.8 Grammars are nothing but shorthands for inductive definitions of nonterminals
12.9 which represent sets of strings. For example, the production
12.10 $A \to B c$ is short for
12.11 @@ -136,8 +137,7 @@
12.12 by(force simp add:zabs_def take_Cons split:nat.split if_splits);
12.13
12.14 text{*
12.15 -Finally we come to the above mentioned lemma about cutting in half a word with two
12.16 -more elements of one sort than of the other sort:
12.17 +Finally we come to the above-mentioned lemma about cutting in half a word with two more elements of one sort than of the other sort:
12.18 *}
12.19
12.20 lemma part1:
12.21 @@ -212,7 +212,7 @@
12.22 holds for all words shorter than @{term w}.
12.23
12.24 The proof continues with a case distinction on @{term w},
12.25 -i.e.\ if @{term w} is empty or not.
12.26 +on whether @{term w} is empty or not.
12.27 *}
12.28
12.29 apply(case_tac w);
12.30 @@ -285,7 +285,9 @@
12.31
12.32 text{*
12.33 We conclude this section with a comparison of our proof with
12.34 -Hopcroft and Ullman's \cite[p.\ts81]{HopcroftUllman}. For a start, the textbook
12.35 +Hopcroft\index{Hopcroft, J. E.} and Ullman's\index{Ullman, J. D.}
12.36 +\cite[p.\ts81]{HopcroftUllman}.
12.37 +For a start, the textbook
12.38 grammar, for no good reason, excludes the empty word, thus complicating
12.39 matters just a little bit: they have 8 instead of our 7 productions.
12.40
12.41 @@ -302,7 +304,9 @@
12.42 thus is not at all similar to the other cases (which are automatic in
12.43 Isabelle). The authors are at least cavalier about this point and may
12.44 even have overlooked the slight difficulty lurking in the omitted
12.45 -cases. This is not atypical for pen-and-paper proofs, once analysed in
12.46 -detail. *}
12.47 +cases. Such errors are found in many pen-and-paper proofs when they
12.48 +are scrutinized formally.%
12.49 +\index{grammars!defining inductively|)}
12.50 +*}
12.51
12.52 (*<*)end(*>*)
13.1 --- a/doc-src/TutorialI/Inductive/Mutual.thy Thu Aug 09 10:17:45 2001 +0200
13.2 +++ b/doc-src/TutorialI/Inductive/Mutual.thy Thu Aug 09 18:12:15 2001 +0200
13.3 @@ -25,7 +25,7 @@
13.4 (simply concatenate the names of the sets involved) and has the conclusion
13.5 @{text[display]"(?x \<in> even \<longrightarrow> ?P ?x) \<and> (?y \<in> odd \<longrightarrow> ?Q ?y)"}
13.6
13.7 -If we want to prove that all even numbers are divisible by 2, we have to
13.8 +If we want to prove that all even numbers are divisible by two, we have to
13.9 generalize the statement as follows:
13.10 *}
13.11
14.1 --- a/doc-src/TutorialI/Inductive/Star.thy Thu Aug 09 10:17:45 2001 +0200
14.2 +++ b/doc-src/TutorialI/Inductive/Star.thy Thu Aug 09 18:12:15 2001 +0200
14.3 @@ -3,6 +3,7 @@
14.4 section{*The Reflexive Transitive Closure*}
14.5
14.6 text{*\label{sec:rtc}
14.7 +\index{reflexive transitive closure!defining inductively|(}%
14.8 An inductive definition may accept parameters, so it can express
14.9 functions that yield sets.
14.10 Relations too can be defined inductively, since they are just sets of pairs.
14.11 @@ -21,7 +22,7 @@
14.12
14.13 text{*\noindent
14.14 The function @{term rtc} is annotated with concrete syntax: instead of
14.15 -@{text"rtc r"} we can read and write @{term"r*"}. The actual definition
14.16 +@{text"rtc r"} we can write @{term"r*"}. The actual definition
14.17 consists of two rules. Reflexivity is obvious and is immediately given the
14.18 @{text iff} attribute to increase automation. The
14.19 second rule, @{thm[source]rtc_step}, says that we can always add one more
14.20 @@ -65,9 +66,9 @@
14.21 apply(erule rtc.induct)
14.22
14.23 txt{*\noindent
14.24 -Unfortunately, even the resulting base case is a problem
14.25 +Unfortunately, even the base case is a problem:
14.26 @{subgoals[display,indent=0,goals_limit=1]}
14.27 -and maybe not what you had expected. We have to abandon this proof attempt.
14.28 +We have to abandon this proof attempt.
14.29 To understand what is going on, let us look again at @{thm[source]rtc.induct}.
14.30 In the above application of @{text erule}, the first premise of
14.31 @{thm[source]rtc.induct} is unified with the first suitable assumption, which
14.32 @@ -150,6 +151,7 @@
14.33 @{thm[source]rtc2.induct}. Since inductive proofs are hard enough
14.34 anyway, we should always pick the simplest induction schema available.
14.35 Hence @{term rtc} is the definition of choice.
14.36 +\index{reflexive transitive closure!defining inductively|)}
14.37
14.38 \begin{exercise}\label{ex:converse-rtc-step}
14.39 Show that the converse of @{thm[source]rtc_step} also holds:
15.1 --- a/doc-src/TutorialI/Inductive/document/AB.tex Thu Aug 09 10:17:45 2001 +0200
15.2 +++ b/doc-src/TutorialI/Inductive/document/AB.tex Thu Aug 09 18:12:15 2001 +0200
15.3 @@ -7,6 +7,7 @@
15.4 %
15.5 \begin{isamarkuptext}%
15.6 \label{sec:CFG}
15.7 +\index{grammars!defining inductively|(}%
15.8 Grammars are nothing but shorthands for inductive definitions of nonterminals
15.9 which represent sets of strings. For example, the production
15.10 $A \to B c$ is short for
15.11 @@ -129,8 +130,7 @@
15.12 \ \isacommand{apply}{\isacharparenleft}simp{\isacharparenright}\isanewline
15.13 \isacommand{by}{\isacharparenleft}force\ simp\ add{\isacharcolon}zabs{\isacharunderscore}def\ take{\isacharunderscore}Cons\ split{\isacharcolon}nat{\isachardot}split\ if{\isacharunderscore}splits{\isacharparenright}%
15.14 \begin{isamarkuptext}%
15.15 -Finally we come to the above mentioned lemma about cutting in half a word with two
15.16 -more elements of one sort than of the other sort:%
15.17 +Finally we come to the above-mentioned lemma about cutting in half a word with two more elements of one sort than of the other sort:%
15.18 \end{isamarkuptext}%
15.19 \isacommand{lemma}\ part{\isadigit{1}}{\isacharcolon}\isanewline
15.20 \ {\isachardoublequote}size{\isacharbrackleft}x{\isasymin}w{\isachardot}\ P\ x{\isacharbrackright}\ {\isacharequal}\ size{\isacharbrackleft}x{\isasymin}w{\isachardot}\ {\isasymnot}P\ x{\isacharbrackright}{\isacharplus}{\isadigit{2}}\ {\isasymLongrightarrow}\isanewline
15.21 @@ -199,7 +199,7 @@
15.22 holds for all words shorter than \isa{w}.
15.23
15.24 The proof continues with a case distinction on \isa{w},
15.25 -i.e.\ if \isa{w} is empty or not.%
15.26 +on whether \isa{w} is empty or not.%
15.27 \end{isamarkuptxt}%
15.28 \isacommand{apply}{\isacharparenleft}case{\isacharunderscore}tac\ w{\isacharparenright}\isanewline
15.29 \ \isacommand{apply}{\isacharparenleft}simp{\isacharunderscore}all{\isacharparenright}%
15.30 @@ -266,7 +266,9 @@
15.31 \isacommand{by}{\isacharparenleft}force\ simp\ add{\isacharcolon}min{\isacharunderscore}less{\isacharunderscore}iff{\isacharunderscore}disj\ split\ add{\isacharcolon}\ nat{\isacharunderscore}diff{\isacharunderscore}split{\isacharparenright}%
15.32 \begin{isamarkuptext}%
15.33 We conclude this section with a comparison of our proof with
15.34 -Hopcroft and Ullman's \cite[p.\ts81]{HopcroftUllman}. For a start, the textbook
15.35 +Hopcroft\index{Hopcroft, J. E.} and Ullman's\index{Ullman, J. D.}
15.36 +\cite[p.\ts81]{HopcroftUllman}.
15.37 +For a start, the textbook
15.38 grammar, for no good reason, excludes the empty word, thus complicating
15.39 matters just a little bit: they have 8 instead of our 7 productions.
15.40
15.41 @@ -283,8 +285,9 @@
15.42 thus is not at all similar to the other cases (which are automatic in
15.43 Isabelle). The authors are at least cavalier about this point and may
15.44 even have overlooked the slight difficulty lurking in the omitted
15.45 -cases. This is not atypical for pen-and-paper proofs, once analysed in
15.46 -detail.%
15.47 +cases. Such errors are found in many pen-and-paper proofs when they
15.48 +are scrutinized formally.%
15.49 +\index{grammars!defining inductively|)}%
15.50 \end{isamarkuptext}%
15.51 \end{isabellebody}%
15.52 %%% Local Variables:
16.1 --- a/doc-src/TutorialI/Inductive/document/Mutual.tex Thu Aug 09 10:17:45 2001 +0200
16.2 +++ b/doc-src/TutorialI/Inductive/document/Mutual.tex Thu Aug 09 18:12:15 2001 +0200
16.3 @@ -29,7 +29,7 @@
16.4 \ \ \ \ \ {\isacharparenleft}{\isacharquery}x\ {\isasymin}\ even\ {\isasymlongrightarrow}\ {\isacharquery}P\ {\isacharquery}x{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isacharquery}y\ {\isasymin}\ odd\ {\isasymlongrightarrow}\ {\isacharquery}Q\ {\isacharquery}y{\isacharparenright}%
16.5 \end{isabelle}
16.6
16.7 -If we want to prove that all even numbers are divisible by 2, we have to
16.8 +If we want to prove that all even numbers are divisible by two, we have to
16.9 generalize the statement as follows:%
16.10 \end{isamarkuptext}%
16.11 \isacommand{lemma}\ {\isachardoublequote}{\isacharparenleft}m\ {\isasymin}\ even\ {\isasymlongrightarrow}\ {\isadigit{2}}\ dvd\ m{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}n\ {\isasymin}\ odd\ {\isasymlongrightarrow}\ {\isadigit{2}}\ dvd\ {\isacharparenleft}Suc\ n{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
17.1 --- a/doc-src/TutorialI/Inductive/document/Star.tex Thu Aug 09 10:17:45 2001 +0200
17.2 +++ b/doc-src/TutorialI/Inductive/document/Star.tex Thu Aug 09 18:12:15 2001 +0200
17.3 @@ -7,6 +7,7 @@
17.4 %
17.5 \begin{isamarkuptext}%
17.6 \label{sec:rtc}
17.7 +\index{reflexive transitive closure!defining inductively|(}%
17.8 An inductive definition may accept parameters, so it can express
17.9 functions that yield sets.
17.10 Relations too can be defined inductively, since they are just sets of pairs.
17.11 @@ -24,7 +25,7 @@
17.12 \begin{isamarkuptext}%
17.13 \noindent
17.14 The function \isa{rtc} is annotated with concrete syntax: instead of
17.15 -\isa{rtc\ r} we can read and write \isa{r{\isacharasterisk}}. The actual definition
17.16 +\isa{rtc\ r} we can write \isa{r{\isacharasterisk}}. The actual definition
17.17 consists of two rules. Reflexivity is obvious and is immediately given the
17.18 \isa{iff} attribute to increase automation. The
17.19 second rule, \isa{rtc{\isacharunderscore}step}, says that we can always add one more
17.20 @@ -68,11 +69,11 @@
17.21 \isacommand{apply}{\isacharparenleft}erule\ rtc{\isachardot}induct{\isacharparenright}%
17.22 \begin{isamarkuptxt}%
17.23 \noindent
17.24 -Unfortunately, even the resulting base case is a problem
17.25 +Unfortunately, even the base case is a problem:
17.26 \begin{isabelle}%
17.27 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}x{\isachardot}\ {\isacharparenleft}y{\isacharcomma}\ z{\isacharparenright}\ {\isasymin}\ r{\isacharasterisk}\ {\isasymLongrightarrow}\ {\isacharparenleft}x{\isacharcomma}\ z{\isacharparenright}\ {\isasymin}\ r{\isacharasterisk}%
17.28 \end{isabelle}
17.29 -and maybe not what you had expected. We have to abandon this proof attempt.
17.30 +We have to abandon this proof attempt.
17.31 To understand what is going on, let us look again at \isa{rtc{\isachardot}induct}.
17.32 In the above application of \isa{erule}, the first premise of
17.33 \isa{rtc{\isachardot}induct} is unified with the first suitable assumption, which
17.34 @@ -154,6 +155,7 @@
17.35 \isa{rtc{\isadigit{2}}{\isachardot}induct}. Since inductive proofs are hard enough
17.36 anyway, we should always pick the simplest induction schema available.
17.37 Hence \isa{rtc} is the definition of choice.
17.38 +\index{reflexive transitive closure!defining inductively|)}
17.39
17.40 \begin{exercise}\label{ex:converse-rtc-step}
17.41 Show that the converse of \isa{rtc{\isacharunderscore}step} also holds:
18.1 --- a/doc-src/TutorialI/Inductive/even-example.tex Thu Aug 09 10:17:45 2001 +0200
18.2 +++ b/doc-src/TutorialI/Inductive/even-example.tex Thu Aug 09 18:12:15 2001 +0200
18.3 @@ -27,7 +27,7 @@
18.4 above states that 0 is even; the second states that if $n$ is even, then so
18.5 is~$n+2$. Given this declaration, Isabelle generates a fixed point
18.6 definition for \isa{even} and proves theorems about it,
18.7 -thus following the definitional approach (see \S\ref{sec:definitional}).
18.8 +thus following the definitional approach (see {\S}\ref{sec:definitional}).
18.9 These theorems
18.10 include the introduction rules specified in the declaration, an elimination
18.11 rule for case analysis and an induction rule. We can refer to these
18.12 @@ -300,9 +300,10 @@
18.13 dispensing with the lemma \isa{even_imp_even_minus_2}.
18.14 This example also justifies the terminology
18.15 \textbf{rule inversion}: the new rule inverts the introduction rule
18.16 -\isa{even.step}.
18.17 +\isa{even.step}. In general, a rule can be inverted when the set of elements
18.18 +it introduces is disjoint from those of the other introduction rules.
18.19
18.20 -For one-off applications of rule inversion, use the \isa{ind_cases} method.
18.21 +For one-off applications of rule inversion, use the \methdx{ind_cases} method.
18.22 Here is an example:
18.23 \begin{isabelle}
18.24 \isacommand{apply}\ (ind_cases\ "Suc(Suc\ n)\ \isasymin \ even")
19.1 --- a/doc-src/TutorialI/Misc/AdvancedInd.thy Thu Aug 09 10:17:45 2001 +0200
19.2 +++ b/doc-src/TutorialI/Misc/AdvancedInd.thy Thu Aug 09 18:12:15 2001 +0200
19.3 @@ -4,7 +4,7 @@
19.4
19.5 text{*\noindent
19.6 Now that we have learned about rules and logic, we take another look at the
19.7 -finer points of induction. The two questions we answer are: what to do if the
19.8 +finer points of induction. We consider two questions: what to do if the
19.9 proposition to be proved is not directly amenable to induction
19.10 (\S\ref{sec:ind-var-in-prems}), and how to utilize (\S\ref{sec:complete-ind})
19.11 and even derive (\S\ref{sec:derive-ind}) new induction schemas. We conclude
19.12 @@ -14,7 +14,7 @@
19.13 subsection{*Massaging the Proposition*};
19.14
19.15 text{*\label{sec:ind-var-in-prems}
19.16 -Often we have assumed that the theorem we want to prove is already in a form
19.17 +Often we have assumed that the theorem to be proved is already in a form
19.18 that is amenable to induction, but sometimes it isn't.
19.19 Here is an example.
19.20 Since @{term"hd"} and @{term"last"} return the first and last element of a
19.21 @@ -31,7 +31,7 @@
19.22 \end{quote}
19.23 and leads to the base case
19.24 @{subgoals[display,indent=0,goals_limit=1]}
19.25 -After simplification, it becomes
19.26 +Simplification reduces the base case to this:
19.27 \begin{isabelle}
19.28 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
19.29 \end{isabelle}
19.30 @@ -41,7 +41,7 @@
19.31 We should not have ignored the warning. Because the induction
19.32 formula is only the conclusion, induction does not affect the occurrence of @{term xs} in the premises.
19.33 Thus the case that should have been trivial
19.34 -becomes unprovable. Fortunately, the solution is easy:\footnote{A very similar
19.35 +becomes unprovable. Fortunately, the solution is easy:\footnote{A similar
19.36 heuristic applies to rule inductions; see \S\ref{sec:rtc}.}
19.37 \begin{quote}
19.38 \emph{Pull all occurrences of the induction variable into the conclusion
19.39 @@ -49,7 +49,7 @@
19.40 \end{quote}
19.41 Thus we should state the lemma as an ordinary
19.42 implication~(@{text"\<longrightarrow>"}), letting
19.43 -@{text rule_format} (\S\ref{sec:forward}) convert the
19.44 +\attrdx{rule_format} (\S\ref{sec:forward}) convert the
19.45 result to the usual @{text"\<Longrightarrow>"} form:
19.46 *};
19.47 (*<*)oops;(*>*)
19.48 @@ -65,7 +65,7 @@
19.49
19.50 If there are multiple premises $A@1$, \dots, $A@n$ containing the
19.51 induction variable, you should turn the conclusion $C$ into
19.52 -\[ A@1 \longrightarrow \cdots A@n \longrightarrow C \]
19.53 +\[ A@1 \longrightarrow \cdots A@n \longrightarrow C. \]
19.54 Additionally, you may also have to universally quantify some other variables,
19.55 which can yield a fairly complex conclusion. However, @{text rule_format}
19.56 can remove any number of occurrences of @{text"\<forall>"} and
19.57 @@ -82,15 +82,17 @@
19.58 *)
19.59
19.60 text{*
19.61 +\index{induction!on a term}%
19.62 A second reason why your proposition may not be amenable to induction is that
19.63 -you want to induct on a whole term, rather than an individual variable. In
19.64 -general, when inducting on some term $t$ you must rephrase the conclusion $C$
19.65 +you want to induct on a complex term, rather than a variable. In
19.66 +general, induction on a term~$t$ requires rephrasing the conclusion~$C$
19.67 as
19.68 \begin{equation}\label{eqn:ind-over-term}
19.69 -\forall y@1 \dots y@n.~ x = t \longrightarrow C
19.70 +\forall y@1 \dots y@n.~ x = t \longrightarrow C.
19.71 \end{equation}
19.72 -where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is new, and
19.73 -perform induction on $x$ afterwards. An example appears in
19.74 +where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is a new variable.
19.75 +Now you can
19.76 +perform induction on~$x$. An example appears in
19.77 \S\ref{sec:complete-ind} below.
19.78
19.79 The very same problem may occur in connection with rule induction. Remember
19.80 @@ -98,7 +100,7 @@
19.81 some inductively defined set and the $x@i$ are variables. If instead we have
19.82 a premise $t \in R$, where $t$ is not just an $n$-tuple of variables, we
19.83 replace it with $(x@1,\dots,x@k) \in R$, and rephrase the conclusion $C$ as
19.84 -\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C \]
19.85 +\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C. \]
19.86 For an example see \S\ref{sec:CTL-revisited} below.
19.87
19.88 Of course, all premises that share free variables with $t$ need to be pulled into
19.89 @@ -132,11 +134,12 @@
19.90 be helpful. We show how to apply such induction schemas by an example.
19.91
19.92 Structural induction on @{typ"nat"} is
19.93 -usually known as mathematical induction. There is also \emph{complete}
19.94 -induction, where you must prove $P(n)$ under the assumption that $P(m)$
19.95 -holds for all $m<n$. In Isabelle, this is the theorem @{thm[source]nat_less_induct}:
19.96 +usually known as mathematical induction. There is also \textbf{complete}
19.97 +\index{induction!complete}%
19.98 +induction, where you prove $P(n)$ under the assumption that $P(m)$
19.99 +holds for all $m<n$. In Isabelle, this is the theorem \tdx{nat_less_induct}:
19.100 @{thm[display]"nat_less_induct"[no_vars]}
19.101 -As an example of its application we prove a property of the following
19.102 +As an application, we prove a property of the following
19.103 function:
19.104 *};
19.105
19.106 @@ -169,7 +172,7 @@
19.107 apply(induct_tac k rule: nat_less_induct);
19.108
19.109 txt{*\noindent
19.110 -which leaves us with the following proof state:
19.111 +We get the following proof state:
19.112 @{subgoals[display,indent=0,margin=65]}
19.113 After stripping the @{text"\<forall>i"}, the proof continues with a case
19.114 distinction on @{term"i"}. The case @{prop"i = 0"} is trivial and we focus on
19.115 @@ -205,13 +208,12 @@
19.116 which, together with (2) yields @{prop"j < f (Suc j)"} (again by
19.117 @{thm[source]le_less_trans}).
19.118
19.119 -This last step shows both the power and the danger of automatic proofs: they
19.120 -will usually not tell you how the proof goes, because it can be very hard to
19.121 -translate the internal proof into a human-readable format. Therefore
19.122 -Chapter~\ref{sec:part2?} introduces a language for writing readable
19.123 -proofs.
19.124 +This last step shows both the power and the danger of automatic proofs. They
19.125 +will usually not tell you how the proof goes, because it can be hard to
19.126 +translate the internal proof into a human-readable format. Automatic
19.127 +proofs are easy to write but hard to read and understand.
19.128
19.129 -We can now derive the desired @{prop"i <= f i"} from @{thm[source]f_incr_lem}:
19.130 +The desired result, @{prop"i <= f i"}, follows from @{thm[source]f_incr_lem}:
19.131 *};
19.132
19.133 lemmas f_incr = f_incr_lem[rule_format, OF refl];
19.134 @@ -254,11 +256,12 @@
19.135 subsection{*Derivation of New Induction Schemas*};
19.136
19.137 text{*\label{sec:derive-ind}
19.138 +\index{induction!deriving new schemas}%
19.139 Induction schemas are ordinary theorems and you can derive new ones
19.140 -whenever you wish. This section shows you how to, using the example
19.141 +whenever you wish. This section shows you how, using the example
19.142 of @{thm[source]nat_less_induct}. Assume we only have structural induction
19.143 -available for @{typ"nat"} and want to derive complete induction. This
19.144 -requires us to generalize the statement first:
19.145 +available for @{typ"nat"} and want to derive complete induction. We
19.146 +must generalize the statement as shown:
19.147 *};
19.148
19.149 lemma induct_lem: "(\<And>n::nat. \<forall>m<n. P m \<Longrightarrow> P n) \<Longrightarrow> \<forall>m<n. P m";
19.150 @@ -289,7 +292,7 @@
19.151 by(insert induct_lem, blast);
19.152
19.153 text{*
19.154 -Finally we should remind the reader that HOL already provides the mother of
19.155 +HOL already provides the mother of
19.156 all inductions, well-founded induction (see \S\ref{sec:Well-founded}). For
19.157 example theorem @{thm[source]nat_less_induct} is
19.158 a special case of @{thm[source]wf_induct} where @{term r} is @{text"<"} on
20.1 --- a/doc-src/TutorialI/Misc/document/AdvancedInd.tex Thu Aug 09 10:17:45 2001 +0200
20.2 +++ b/doc-src/TutorialI/Misc/document/AdvancedInd.tex Thu Aug 09 18:12:15 2001 +0200
20.3 @@ -5,7 +5,7 @@
20.4 \begin{isamarkuptext}%
20.5 \noindent
20.6 Now that we have learned about rules and logic, we take another look at the
20.7 -finer points of induction. The two questions we answer are: what to do if the
20.8 +finer points of induction. We consider two questions: what to do if the
20.9 proposition to be proved is not directly amenable to induction
20.10 (\S\ref{sec:ind-var-in-prems}), and how to utilize (\S\ref{sec:complete-ind})
20.11 and even derive (\S\ref{sec:derive-ind}) new induction schemas. We conclude
20.12 @@ -17,7 +17,7 @@
20.13 %
20.14 \begin{isamarkuptext}%
20.15 \label{sec:ind-var-in-prems}
20.16 -Often we have assumed that the theorem we want to prove is already in a form
20.17 +Often we have assumed that the theorem to be proved is already in a form
20.18 that is amenable to induction, but sometimes it isn't.
20.19 Here is an example.
20.20 Since \isa{hd} and \isa{last} return the first and last element of a
20.21 @@ -35,7 +35,7 @@
20.22 \begin{isabelle}%
20.23 \ {\isadigit{1}}{\isachardot}\ xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymLongrightarrow}\ hd\ {\isacharparenleft}rev\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ {\isacharequal}\ last\ {\isacharbrackleft}{\isacharbrackright}%
20.24 \end{isabelle}
20.25 -After simplification, it becomes
20.26 +Simplification reduces the base case to this:
20.27 \begin{isabelle}
20.28 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
20.29 \end{isabelle}
20.30 @@ -45,7 +45,7 @@
20.31 We should not have ignored the warning. Because the induction
20.32 formula is only the conclusion, induction does not affect the occurrence of \isa{xs} in the premises.
20.33 Thus the case that should have been trivial
20.34 -becomes unprovable. Fortunately, the solution is easy:\footnote{A very similar
20.35 +becomes unprovable. Fortunately, the solution is easy:\footnote{A similar
20.36 heuristic applies to rule inductions; see \S\ref{sec:rtc}.}
20.37 \begin{quote}
20.38 \emph{Pull all occurrences of the induction variable into the conclusion
20.39 @@ -53,7 +53,7 @@
20.40 \end{quote}
20.41 Thus we should state the lemma as an ordinary
20.42 implication~(\isa{{\isasymlongrightarrow}}), letting
20.43 -\isa{rule{\isacharunderscore}format} (\S\ref{sec:forward}) convert the
20.44 +\attrdx{rule_format} (\S\ref{sec:forward}) convert the
20.45 result to the usual \isa{{\isasymLongrightarrow}} form:%
20.46 \end{isamarkuptxt}%
20.47 \isacommand{lemma}\ hd{\isacharunderscore}rev\ {\isacharbrackleft}rule{\isacharunderscore}format{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymlongrightarrow}\ hd{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ last\ xs{\isachardoublequote}%
20.48 @@ -67,7 +67,7 @@
20.49
20.50 If there are multiple premises $A@1$, \dots, $A@n$ containing the
20.51 induction variable, you should turn the conclusion $C$ into
20.52 -\[ A@1 \longrightarrow \cdots A@n \longrightarrow C \]
20.53 +\[ A@1 \longrightarrow \cdots A@n \longrightarrow C. \]
20.54 Additionally, you may also have to universally quantify some other variables,
20.55 which can yield a fairly complex conclusion. However, \isa{rule{\isacharunderscore}format}
20.56 can remove any number of occurrences of \isa{{\isasymforall}} and
20.57 @@ -75,15 +75,17 @@
20.58 \end{isamarkuptxt}%
20.59 %
20.60 \begin{isamarkuptext}%
20.61 +\index{induction!on a term}%
20.62 A second reason why your proposition may not be amenable to induction is that
20.63 -you want to induct on a whole term, rather than an individual variable. In
20.64 -general, when inducting on some term $t$ you must rephrase the conclusion $C$
20.65 +you want to induct on a complex term, rather than a variable. In
20.66 +general, induction on a term~$t$ requires rephrasing the conclusion~$C$
20.67 as
20.68 \begin{equation}\label{eqn:ind-over-term}
20.69 -\forall y@1 \dots y@n.~ x = t \longrightarrow C
20.70 +\forall y@1 \dots y@n.~ x = t \longrightarrow C.
20.71 \end{equation}
20.72 -where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is new, and
20.73 -perform induction on $x$ afterwards. An example appears in
20.74 +where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is a new variable.
20.75 +Now you can
20.76 +perform induction on~$x$. An example appears in
20.77 \S\ref{sec:complete-ind} below.
20.78
20.79 The very same problem may occur in connection with rule induction. Remember
20.80 @@ -91,7 +93,7 @@
20.81 some inductively defined set and the $x@i$ are variables. If instead we have
20.82 a premise $t \in R$, where $t$ is not just an $n$-tuple of variables, we
20.83 replace it with $(x@1,\dots,x@k) \in R$, and rephrase the conclusion $C$ as
20.84 -\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C \]
20.85 +\[ \forall y@1 \dots y@n.~ (x@1,\dots,x@k) = t \longrightarrow C. \]
20.86 For an example see \S\ref{sec:CTL-revisited} below.
20.87
20.88 Of course, all premises that share free variables with $t$ need to be pulled into
20.89 @@ -127,13 +129,14 @@
20.90 be helpful. We show how to apply such induction schemas by an example.
20.91
20.92 Structural induction on \isa{nat} is
20.93 -usually known as mathematical induction. There is also \emph{complete}
20.94 -induction, where you must prove $P(n)$ under the assumption that $P(m)$
20.95 -holds for all $m<n$. In Isabelle, this is the theorem \isa{nat{\isacharunderscore}less{\isacharunderscore}induct}:
20.96 +usually known as mathematical induction. There is also \textbf{complete}
20.97 +\index{induction!complete}%
20.98 +induction, where you prove $P(n)$ under the assumption that $P(m)$
20.99 +holds for all $m<n$. In Isabelle, this is the theorem \tdx{nat_less_induct}:
20.100 \begin{isabelle}%
20.101 \ \ \ \ \ {\isacharparenleft}{\isasymAnd}n{\isachardot}\ {\isasymforall}m{\isachardot}\ m\ {\isacharless}\ n\ {\isasymlongrightarrow}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n%
20.102 \end{isabelle}
20.103 -As an example of its application we prove a property of the following
20.104 +As an application, we prove a property of the following
20.105 function:%
20.106 \end{isamarkuptext}%
20.107 \isacommand{consts}\ f\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}nat\ {\isasymRightarrow}\ nat{\isachardoublequote}\isanewline
20.108 @@ -162,7 +165,7 @@
20.109 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ k\ rule{\isacharcolon}\ nat{\isacharunderscore}less{\isacharunderscore}induct{\isacharparenright}%
20.110 \begin{isamarkuptxt}%
20.111 \noindent
20.112 -which leaves us with the following proof state:
20.113 +We get the following proof state:
20.114 \begin{isabelle}%
20.115 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}n{\isachardot}\ {\isasymforall}m{\isachardot}\ m\ {\isacharless}\ n\ {\isasymlongrightarrow}\ {\isacharparenleft}{\isasymforall}i{\isachardot}\ m\ {\isacharequal}\ f\ i\ {\isasymlongrightarrow}\ i\ {\isasymle}\ f\ i{\isacharparenright}\ {\isasymLongrightarrow}\isanewline
20.116 \isaindent{\ {\isadigit{1}}{\isachardot}\ {\isasymAnd}n{\isachardot}\ }{\isasymforall}i{\isachardot}\ n\ {\isacharequal}\ f\ i\ {\isasymlongrightarrow}\ i\ {\isasymle}\ f\ i%
20.117 @@ -204,13 +207,12 @@
20.118 which, together with (2) yields \isa{j\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}} (again by
20.119 \isa{le{\isacharunderscore}less{\isacharunderscore}trans}).
20.120
20.121 -This last step shows both the power and the danger of automatic proofs: they
20.122 -will usually not tell you how the proof goes, because it can be very hard to
20.123 -translate the internal proof into a human-readable format. Therefore
20.124 -Chapter~\ref{sec:part2?} introduces a language for writing readable
20.125 -proofs.
20.126 +This last step shows both the power and the danger of automatic proofs. They
20.127 +will usually not tell you how the proof goes, because it can be hard to
20.128 +translate the internal proof into a human-readable format. Automatic
20.129 +proofs are easy to write but hard to read and understand.
20.130
20.131 -We can now derive the desired \isa{i\ {\isasymle}\ f\ i} from \isa{f{\isacharunderscore}incr{\isacharunderscore}lem}:%
20.132 +The desired result, \isa{i\ {\isasymle}\ f\ i}, follows from \isa{f{\isacharunderscore}incr{\isacharunderscore}lem}:%
20.133 \end{isamarkuptext}%
20.134 \isacommand{lemmas}\ f{\isacharunderscore}incr\ {\isacharequal}\ f{\isacharunderscore}incr{\isacharunderscore}lem{\isacharbrackleft}rule{\isacharunderscore}format{\isacharcomma}\ OF\ refl{\isacharbrackright}%
20.135 \begin{isamarkuptext}%
20.136 @@ -255,11 +257,12 @@
20.137 %
20.138 \begin{isamarkuptext}%
20.139 \label{sec:derive-ind}
20.140 +\index{induction!deriving new schemas}%
20.141 Induction schemas are ordinary theorems and you can derive new ones
20.142 -whenever you wish. This section shows you how to, using the example
20.143 +whenever you wish. This section shows you how, using the example
20.144 of \isa{nat{\isacharunderscore}less{\isacharunderscore}induct}. Assume we only have structural induction
20.145 -available for \isa{nat} and want to derive complete induction. This
20.146 -requires us to generalize the statement first:%
20.147 +available for \isa{nat} and want to derive complete induction. We
20.148 +must generalize the statement as shown:%
20.149 \end{isamarkuptext}%
20.150 \isacommand{lemma}\ induct{\isacharunderscore}lem{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m{\isachardoublequote}\isanewline
20.151 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ n{\isacharparenright}%
20.152 @@ -288,7 +291,7 @@
20.153 \isacommand{theorem}\ nat{\isacharunderscore}less{\isacharunderscore}induct{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n{\isachardoublequote}\isanewline
20.154 \isacommand{by}{\isacharparenleft}insert\ induct{\isacharunderscore}lem{\isacharcomma}\ blast{\isacharparenright}%
20.155 \begin{isamarkuptext}%
20.156 -Finally we should remind the reader that HOL already provides the mother of
20.157 +HOL already provides the mother of
20.158 all inductions, well-founded induction (see \S\ref{sec:Well-founded}). For
20.159 example theorem \isa{nat{\isacharunderscore}less{\isacharunderscore}induct} is
20.160 a special case of \isa{wf{\isacharunderscore}induct} where \isa{r} is \isa{{\isacharless}} on
21.1 --- a/doc-src/TutorialI/Misc/document/simp.tex Thu Aug 09 10:17:45 2001 +0200
21.2 +++ b/doc-src/TutorialI/Misc/document/simp.tex Thu Aug 09 18:12:15 2001 +0200
21.3 @@ -287,7 +287,7 @@
21.4 \isa{case}-expressions, as it does \isa{if}-expressions,
21.5 because with recursive datatypes it could lead to nontermination.
21.6 Instead, the simplifier has a modifier
21.7 -\isa{split}\index{*split (modified)}
21.8 +\isa{split}\index{*split (modifier)}
21.9 for adding splitting rules explicitly. The
21.10 lemma above can be proved in one step by%
21.11 \end{isamarkuptxt}%
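The \isa{split} modifier whose index entry is corrected in this hunk is supplied to \isa{simp} as shown below — a minimal sketch in Isabelle syntax, assuming the split rule \isa{list.split} that Isabelle generates automatically for the \isa{list} datatype:

```isabelle
(* case-splits over a recursive datatype are not performed by default;
   the split modifier adds the splitting rule explicitly *)
lemma "(case xs of [] \<Rightarrow> zs | y#ys \<Rightarrow> y # (ys @ zs)) = xs @ zs"
by (simp split: list.split)
```

Without the modifier, \isa{simp} leaves the \isa{case}-expression untouched and the proof does not go through in one step.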
22.1 --- a/doc-src/TutorialI/Misc/simp.thy Thu Aug 09 10:17:45 2001 +0200
22.2 +++ b/doc-src/TutorialI/Misc/simp.thy Thu Aug 09 18:12:15 2001 +0200
22.3 @@ -281,7 +281,7 @@
22.4 @{text"case"}-expressions, as it does @{text"if"}-expressions,
22.5 because with recursive datatypes it could lead to nontermination.
22.6 Instead, the simplifier has a modifier
22.7 -@{text split}\index{*split (modified)}
22.8 +@{text split}\index{*split (modifier)}
22.9 for adding splitting rules explicitly. The
22.10 lemma above can be proved in one step by
22.11 *}
23.1 --- a/doc-src/TutorialI/Recdef/Nested0.thy Thu Aug 09 10:17:45 2001 +0200
23.2 +++ b/doc-src/TutorialI/Recdef/Nested0.thy Thu Aug 09 18:12:15 2001 +0200
23.3 @@ -3,6 +3,7 @@
23.4 (*>*)
23.5
23.6 text{*
23.7 +\index{datatypes!nested}%
23.8 In \S\ref{sec:nested-datatype} we defined the datatype of terms
23.9 *}
23.10
24.1 --- a/doc-src/TutorialI/Recdef/Nested1.thy Thu Aug 09 10:17:45 2001 +0200
24.2 +++ b/doc-src/TutorialI/Recdef/Nested1.thy Thu Aug 09 18:12:15 2001 +0200
24.3 @@ -25,15 +25,15 @@
24.4 where @{term set} returns the set of elements of a list
24.5 and @{text"term_list_size :: term list \<Rightarrow> nat"} is an auxiliary
24.6 function automatically defined by Isabelle
24.7 -(while processing the declaration of @{text term}). First we have to understand why the
24.8 -recursive call of @{term trev} underneath @{term map} leads to the above
24.9 -condition. The reason is that \isacommand{recdef} ``knows'' that @{term map}
24.10 +(while processing the declaration of @{text term}). Why does the
24.11 +recursive call of @{term trev} lead to this
24.12 +condition? Because \isacommand{recdef} knows that @{term map}
24.13 will apply @{term trev} only to elements of @{term ts}. Thus the
24.14 condition expresses that the size of the argument @{prop"t : set ts"} of any
24.15 recursive call of @{term trev} is strictly less than @{term"size(App f ts)"},
24.16 which equals @{term"Suc(term_list_size ts)"}. We will now prove the termination condition and
24.17 continue with our definition. Below we return to the question of how
24.18 -\isacommand{recdef} ``knows'' about @{term map}.
24.19 +\isacommand{recdef} knows about @{term map}.
24.20 *};
24.21
24.22 (*<*)
25.1 --- a/doc-src/TutorialI/Recdef/Nested2.thy Thu Aug 09 10:17:45 2001 +0200
25.2 +++ b/doc-src/TutorialI/Recdef/Nested2.thy Thu Aug 09 18:12:15 2001 +0200
25.3 @@ -54,19 +54,19 @@
25.4
25.5 Let us now return to the question of how \isacommand{recdef} can come up with
25.6 sensible termination conditions in the presence of higher-order functions
25.7 -like @{term"map"}. For a start, if nothing were known about @{term"map"},
25.8 +like @{term"map"}. For a start, if nothing were known about @{term"map"}, then
25.9 @{term"map trev ts"} might apply @{term"trev"} to arbitrary terms, and thus
25.10 \isacommand{recdef} would try to prove the unprovable @{term"size t < Suc
25.11 (term_list_size ts)"}, without any assumption about @{term"t"}. Therefore
25.12 \isacommand{recdef} has been supplied with the congruence theorem
25.13 @{thm[source]map_cong}:
25.14 @{thm[display,margin=50]"map_cong"[no_vars]}
25.15 -Its second premise expresses (indirectly) that the first argument of
25.16 -@{term"map"} is only applied to elements of its second argument. Congruence
25.17 -rules for other higher-order functions on lists look very similar. If you get
25.18 +Its second premise expresses that in @{term"map f xs"},
25.19 +function @{term"f"} is only applied to elements of list @{term"xs"}. Congruence
25.20 +rules for other higher-order functions on lists are similar. If you get
25.21 into a situation where you need to supply \isacommand{recdef} with new
25.22 -congruence rules, you can either append a hint after the end of
25.23 -the recursion equations
25.24 +congruence rules, you can append a hint after the end of
25.25 +the recursion equations:
25.26 *}
25.27 (*<*)
25.28 consts dummy :: "nat => nat"
25.29 @@ -76,14 +76,14 @@
25.30 (hints recdef_cong: map_cong)
25.31
25.32 text{*\noindent
25.33 -or declare them globally
25.34 -by giving them the \attrdx{recdef_cong} attribute as in
25.35 +Or you can declare them globally
25.36 +by giving them the \attrdx{recdef_cong} attribute:
25.37 *}
25.38
25.39 declare map_cong[recdef_cong]
25.40
25.41 text{*
25.42 -Note that the @{text cong} and @{text recdef_cong} attributes are
25.43 +The @{text cong} and @{text recdef_cong} attributes are
25.44 intentionally kept apart because they control different activities, namely
25.45 simplification and making recursive definitions.
25.46 % The local @{text cong} in
26.1 --- a/doc-src/TutorialI/Recdef/document/Nested0.tex Thu Aug 09 10:17:45 2001 +0200
26.2 +++ b/doc-src/TutorialI/Recdef/document/Nested0.tex Thu Aug 09 18:12:15 2001 +0200
26.3 @@ -3,6 +3,7 @@
26.4 \def\isabellecontext{Nested{\isadigit{0}}}%
26.5 %
26.6 \begin{isamarkuptext}%
26.7 +\index{datatypes!nested}%
26.8 In \S\ref{sec:nested-datatype} we defined the datatype of terms%
26.9 \end{isamarkuptext}%
26.10 \isacommand{datatype}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}{\isachardoublequote}term{\isachardoublequote}\ {\isacharequal}\ Var\ {\isacharprime}a\ {\isacharbar}\ App\ {\isacharprime}b\ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}term\ list{\isachardoublequote}%
27.1 --- a/doc-src/TutorialI/Recdef/document/Nested1.tex Thu Aug 09 10:17:45 2001 +0200
27.2 +++ b/doc-src/TutorialI/Recdef/document/Nested1.tex Thu Aug 09 18:12:15 2001 +0200
27.3 @@ -27,15 +27,15 @@
27.4 where \isa{set} returns the set of elements of a list
27.5 and \isa{term{\isacharunderscore}list{\isacharunderscore}size\ {\isacharcolon}{\isacharcolon}\ term\ list\ {\isasymRightarrow}\ nat} is an auxiliary
27.6 function automatically defined by Isabelle
27.7 -(while processing the declaration of \isa{term}). First we have to understand why the
27.8 -recursive call of \isa{trev} underneath \isa{map} leads to the above
27.9 -condition. The reason is that \isacommand{recdef} ``knows'' that \isa{map}
27.10 +(while processing the declaration of \isa{term}). Why does the
27.11 +recursive call of \isa{trev} lead to this
27.12 +condition? Because \isacommand{recdef} knows that \isa{map}
27.13 will apply \isa{trev} only to elements of \isa{ts}. Thus the
27.14 condition expresses that the size of the argument \isa{t\ {\isasymin}\ set\ ts} of any
27.15 recursive call of \isa{trev} is strictly less than \isa{size\ {\isacharparenleft}App\ f\ ts{\isacharparenright}},
27.16 which equals \isa{Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}}. We will now prove the termination condition and
27.17 continue with our definition. Below we return to the question of how
27.18 -\isacommand{recdef} ``knows'' about \isa{map}.%
27.19 +\isacommand{recdef} knows about \isa{map}.%
27.20 \end{isamarkuptext}%
27.21 \end{isabellebody}%
27.22 %%% Local Variables:
28.1 --- a/doc-src/TutorialI/Recdef/document/Nested2.tex Thu Aug 09 10:17:45 2001 +0200
28.2 +++ b/doc-src/TutorialI/Recdef/document/Nested2.tex Thu Aug 09 18:12:15 2001 +0200
28.3 @@ -54,7 +54,7 @@
28.4
28.5 Let us now return to the question of how \isacommand{recdef} can come up with
28.6 sensible termination conditions in the presence of higher-order functions
28.7 -like \isa{map}. For a start, if nothing were known about \isa{map},
28.8 +like \isa{map}. For a start, if nothing were known about \isa{map}, then
28.9 \isa{map\ trev\ ts} might apply \isa{trev} to arbitrary terms, and thus
28.10 \isacommand{recdef} would try to prove the unprovable \isa{size\ t\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}}, without any assumption about \isa{t}. Therefore
28.11 \isacommand{recdef} has been supplied with the congruence theorem
28.12 @@ -63,22 +63,22 @@
28.13 \ \ \ \ \ {\isasymlbrakk}xs\ {\isacharequal}\ ys{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ x\ {\isasymin}\ set\ ys\ {\isasymLongrightarrow}\ f\ x\ {\isacharequal}\ g\ x{\isasymrbrakk}\isanewline
28.14 \isaindent{\ \ \ \ \ }{\isasymLongrightarrow}\ map\ f\ xs\ {\isacharequal}\ map\ g\ ys%
28.15 \end{isabelle}
28.16 -Its second premise expresses (indirectly) that the first argument of
28.17 -\isa{map} is only applied to elements of its second argument. Congruence
28.18 -rules for other higher-order functions on lists look very similar. If you get
28.19 +Its second premise expresses that in \isa{map\ f\ xs},
28.20 +function \isa{f} is only applied to elements of list \isa{xs}. Congruence
28.21 +rules for other higher-order functions on lists are similar. If you get
28.22 into a situation where you need to supply \isacommand{recdef} with new
28.23 -congruence rules, you can either append a hint after the end of
28.24 -the recursion equations%
28.25 +congruence rules, you can append a hint after the end of
28.26 +the recursion equations:%
28.27 \end{isamarkuptext}%
28.28 {\isacharparenleft}\isakeyword{hints}\ recdef{\isacharunderscore}cong{\isacharcolon}\ map{\isacharunderscore}cong{\isacharparenright}%
28.29 \begin{isamarkuptext}%
28.30 \noindent
28.31 -or declare them globally
28.32 -by giving them the \attrdx{recdef_cong} attribute as in%
28.33 +Or you can declare them globally
28.34 +by giving them the \attrdx{recdef_cong} attribute:%
28.35 \end{isamarkuptext}%
28.36 \isacommand{declare}\ map{\isacharunderscore}cong{\isacharbrackleft}recdef{\isacharunderscore}cong{\isacharbrackright}%
28.37 \begin{isamarkuptext}%
28.38 -Note that the \isa{cong} and \isa{recdef{\isacharunderscore}cong} attributes are
28.39 +The \isa{cong} and \isa{recdef{\isacharunderscore}cong} attributes are
28.40 intentionally kept apart because they control different activities, namely
28.41 simplification and making recursive definitions.
28.42 % The local \isa{cong} in
29.1 --- a/doc-src/TutorialI/Rules/rules.tex Thu Aug 09 10:17:45 2001 +0200
29.2 +++ b/doc-src/TutorialI/Rules/rules.tex Thu Aug 09 18:12:15 2001 +0200
29.3 @@ -1311,7 +1311,6 @@
29.4 \isacommand{by}\ (auto\ intro:\ order_antisym)
29.5 \end{isabelle}
29.6
29.7 -\REMARK{refer to \isa{Main_wo_AC} if we introduce it}
29.8
29.9 \subsection{Indefinite Descriptions}
29.10
30.1 --- a/doc-src/TutorialI/Sets/sets.tex Thu Aug 09 10:17:45 2001 +0200
30.2 +++ b/doc-src/TutorialI/Sets/sets.tex Thu Aug 09 18:12:15 2001 +0200
30.3 @@ -913,7 +913,7 @@
30.4 ((x,y)\ \isasymin\ less_than)\ =\ (x\ <\ y)
30.5 \rulenamedx{less_than_iff}\isanewline
30.6 wf\ less_than
30.7 -\rulename{wf_less_than}
30.8 +\rulenamedx{wf_less_than}
30.9 \end{isabelle}
30.10
30.11 The notion of measure generalizes to the
30.12 @@ -928,7 +928,7 @@
30.13 \isasymin\ r\isacharbraceright
30.14 \rulenamedx{inv_image_def}\isanewline
30.15 wf\ r\ \isasymLongrightarrow\ wf\ (inv_image\ r\ f)
30.16 -\rulename{wf_inv_image}
30.17 +\rulenamedx{wf_inv_image}
30.18 \end{isabelle}
30.19
30.20 A measure function involves the natural numbers. The relation \isa{measure
30.21 @@ -939,7 +939,7 @@
30.22 measure\ \isasymequiv\ inv_image\ less_than%
30.23 \rulenamedx{measure_def}\isanewline
30.24 wf\ (measure\ f)
30.25 -\rulename{wf_measure}
30.26 +\rulenamedx{wf_measure}
30.27 \end{isabelle}
30.28
30.29 Of the other constructions, the most important is the
30.30 @@ -957,7 +957,7 @@
30.31 \rulenamedx{lex_prod_def}%
30.32 \par\smallskip
30.33 \isasymlbrakk wf\ ra;\ wf\ rb\isasymrbrakk \ \isasymLongrightarrow \ wf\ (ra\ <*lex*>\ rb)
30.34 -\rulename{wf_lex_prod}
30.35 +\rulenamedx{wf_lex_prod}
30.36 \end{isabelle}
30.37
30.38 These constructions can be used in a
31.1 --- a/doc-src/TutorialI/Types/Axioms.thy Thu Aug 09 10:17:45 2001 +0200
31.2 +++ b/doc-src/TutorialI/Types/Axioms.thy Thu Aug 09 18:12:15 2001 +0200
31.3 @@ -3,10 +3,10 @@
31.4 subsection{*Axioms*}
31.5
31.6
31.7 -text{*Now we want to attach axioms to our classes. Then we can reason on the
31.8 -level of classes and the results will be applicable to all types in a class,
31.9 -just as in axiomatic mathematics. These ideas are demonstrated by means of
31.10 -our above ordering relations.
31.11 +text{*Attaching axioms to our classes lets us reason on the
31.12 +level of classes. The results will be applicable to all types in a class,
31.13 +just as in axiomatic mathematics. These ideas are demonstrated by means of
31.14 +our ordering relations.
31.15 *}
31.16
31.17 subsubsection{*Partial Orders*}
31.18 @@ -26,13 +26,13 @@
31.19 The first three axioms are the familiar ones, and the final one
31.20 requires that @{text"<<"} and @{text"<<="} are related as expected.
31.21 Note that behind the scenes, Isabelle has restricted the axioms to class
31.22 -@{text parord}. For example, this is what @{thm[source]refl} really looks like:
31.23 +@{text parord}. For example, the axiom @{thm[source]refl} really is
31.24 @{thm[show_sorts]refl}.
31.25
31.26 We have not made @{thm[source]less_le} a global definition because it would
31.27 fix once and for all that @{text"<<"} is defined in terms of @{text"<<="} and
31.28 never the other way around. Below you will see why we want to avoid this
31.29 -asymmetry. The drawback of the above class is that
31.30 +asymmetry. The drawback of our choice is that
31.31 we need to define both @{text"<<="} and @{text"<<"} for each instance.
31.32
31.33 We can now prove simple theorems in this abstract setting, for example
31.34 @@ -42,14 +42,16 @@
31.35 lemma [simp]: "(x::'a::parord) << y \<Longrightarrow> (\<not> y << x) = True";
31.36
31.37 txt{*\noindent
31.38 -The conclusion is not simply @{prop"\<not> y << x"} because the preprocessor
31.39 -of the simplifier (see \S\ref{sec:simp-preprocessor})
31.40 -would turn this into @{prop"y << x = False"}, thus yielding
31.41 -a nonterminating rewrite rule. In the above form it is a generally useful
31.42 -rule.
31.43 +The conclusion is not just @{prop"\<not> y << x"} because the
31.44 +simplifier's preprocessor (see \S\ref{sec:simp-preprocessor})
31.45 +would turn it into @{prop"y << x = False"}, yielding
31.46 +a nonterminating rewrite rule.
31.47 +(It would be used to try to prove its own precondition \emph{ad
31.48 + infinitum}.)
31.49 +In the form above, the rule is useful.
31.50 The type constraint is necessary because otherwise Isabelle would only assume
31.51 -@{text"'a::ordrel"} (as required in the type of @{text"<<"}), in
31.52 -which case the proposition is not a theorem. The proof is easy:
31.53 +@{text"'a::ordrel"} (as required in the type of @{text"<<"}),
31.54 +and then the proposition is not a theorem. The proof is easy:
31.55 *}
31.56
31.57 by(simp add:less_le antisym);
31.58 @@ -67,8 +69,9 @@
31.59 This time @{text intro_classes} leaves us with the four axioms,
31.60 specialized to type @{typ bool}, as subgoals:
31.61 @{subgoals[display,show_types,indent=0]}
31.62 -Fortunately, the proof is easy for blast, once we have unfolded the definitions
31.63 -of @{text"<<"} and @{text"<<="} at @{typ bool}:
31.64 +Fortunately, the proof is easy for \isa{blast}
31.65 +once we have unfolded the definitions
31.66 +of @{text"<<"} and @{text"<<="} at type @{typ bool}:
31.67 *}
31.68
31.69 apply(simp_all (no_asm_use) only: le_bool_def less_bool_def);
31.70 @@ -85,16 +88,15 @@
31.71
31.72 text{*\noindent
31.73 The effect is not stunning, but it demonstrates the principle. It also shows
31.74 -that tools like the simplifier can deal with generic rules as well.
31.75 -It should be clear that the main advantage of the axiomatic method is that
31.76 -theorems can be proved in the abstract and one does not need to repeat the
31.77 -proof for each instance.
31.78 +that tools like the simplifier can deal with generic rules.
31.79 +The main advantage of the axiomatic method is that
31.80 +theorems can be proved in the abstract and freely reused for each instance.
31.81 *}
31.82
31.83 subsubsection{*Linear Orders*}
31.84
31.85 text{* If any two elements of a partial order are comparable it is a
31.86 -\emph{linear} or \emph{total} order: *}
31.87 +\textbf{linear} or \textbf{total} order: *}
31.88
31.89 axclass linord < parord
31.90 linear: "x <<= y \<or> y <<= x"
31.91 @@ -112,9 +114,10 @@
31.92 done
31.93
31.94 text{*
31.95 -Linear orders are an example of subclassing by construction, which is the most
31.96 -common case. It is also possible to prove additional subclass relationships
31.97 -later on, i.e.\ subclassing by proof. This is the topic of the following
31.98 +Linear orders are an example of subclassing\index{subclasses}
31.99 +by construction, which is the most
31.100 +common case. Subclass relationships can also be proved.
31.101 +This is the topic of the following
31.102 paragraph.
31.103 *}
31.104
31.105 @@ -122,7 +125,7 @@
31.106
31.107 text{*
31.108 An alternative axiomatization of partial orders takes @{text"<<"} rather than
31.109 -@{text"<<="} as the primary concept. The result is a \emph{strict} order:
31.110 +@{text"<<="} as the primary concept. The result is a \textbf{strict} order:
31.111 *}
31.112
31.113 axclass strord < ordrel
31.114 @@ -173,7 +176,7 @@
31.115
31.116 text{*
31.117 A class may inherit from more than one direct superclass. This is called
31.118 -multiple inheritance and is certainly permitted. For example we could define
31.119 +\bfindex{multiple inheritance}. For example, we could define
31.120 the classes of well-founded orderings and well-orderings:
31.121 *}
31.122
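The classes of well-founded orderings and well-orderings mentioned in the surrounding text can be sketched as follows — the class and axiom names are assumptions, and \isa{wf} is HOL's well-foundedness predicate:

```isabelle
(* a partial order whose strict part is well-founded *)
axclass wford < parord
  wford: "wf {(y,x). y << x}"

(* multiple inheritance: a well-ordering is both linear and well-founded *)
axclass wellord < linord, wford
```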
31.123 @@ -220,13 +223,12 @@
31.124 This concludes our demonstration of type classes based on orderings. We
31.125 remind our readers that \isa{Main} contains a theory of
31.126 orderings phrased in terms of the usual @{text"\<le>"} and @{text"<"}.
31.127 -It is recommended that, if possible,
31.128 -you base your own ordering relations on this theory.
31.129 +If possible, base your own ordering relations on this theory.
31.130 *}
31.131
31.132 subsubsection{*Inconsistencies*}
31.133
31.134 -text{* The reader may be wondering what happens if we, maybe accidentally,
31.135 +text{* The reader may be wondering what happens if we
31.136 attach an inconsistent set of axioms to a class. So far we have always
31.137 avoided adding new axioms to HOL for fear of inconsistencies and suddenly it
31.138 seems that we are throwing all caution to the wind. So why is there no
32.1 --- a/doc-src/TutorialI/Types/Overloading.thy Thu Aug 09 10:17:45 2001 +0200
32.2 +++ b/doc-src/TutorialI/Types/Overloading.thy Thu Aug 09 18:12:15 2001 +0200
32.3 @@ -4,8 +4,9 @@
32.4
32.5 text{*\noindent
32.6 This \isacommand{instance} declaration can be read like the declaration of
32.7 -a function on types: the constructor @{text list} maps types of class @{text
32.8 -term}, i.e.\ all HOL types, to types of class @{text ordrel}, i.e.\
32.9 +a function on types. The constructor @{text list} maps types of class @{text
32.10 +term} (all HOL types) to types of class @{text ordrel};
32.11 +in other words,
32.12 if @{text"ty :: term"} then @{text"ty list :: ordrel"}.
32.13 Of course we should also define the meaning of @{text <<=} and
32.14 @{text <<} on lists:
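One plausible way to give \isa{<<=} and \isa{<<} a meaning on lists is the prefix ordering — the definition names below are assumptions:

```isabelle
defs (overloaded)
  prefix_def:        "xs <<= (ys::'a list) \<equiv> \<exists>zs. ys = xs @ zs"
  strict_prefix_def: "xs <<  (ys::'a list) \<equiv> xs <<= ys \<and> xs \<noteq> ys"
```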
33.1 --- a/doc-src/TutorialI/Types/Overloading0.thy Thu Aug 09 10:17:45 2001 +0200
33.2 +++ b/doc-src/TutorialI/Types/Overloading0.thy Thu Aug 09 18:12:15 2001 +0200
33.3 @@ -34,10 +34,8 @@
33.4
33.5 However, there is nothing to prevent the user from forming terms such as
33.6 @{text"inverse []"} and proving theorems such as @{text"inverse []
33.7 -= inverse []"}, although we never defined inverse on lists. We hasten to say
33.8 -that there is nothing wrong with such terms and theorems. But it would be
33.9 -nice if we could prevent their formation, simply because it is very likely
33.10 -that the user did not mean to write what he did. Thus she had better not waste
33.11 -her time pursuing it further. This requires the use of type classes.
33.12 += inverse []"} although inverse is not defined on lists. Proving theorems about
33.13 +undefined constants does not endanger soundness, but it is pointless.
33.14 +To prevent such terms from even being formed requires the use of type classes.
33.15 *}
33.16 (*<*)end(*>*)
34.1 --- a/doc-src/TutorialI/Types/Overloading1.thy Thu Aug 09 10:17:45 2001 +0200
34.2 +++ b/doc-src/TutorialI/Types/Overloading1.thy Thu Aug 09 18:12:15 2001 +0200
34.3 @@ -3,7 +3,7 @@
34.4 subsubsection{*Controlled Overloading with Type Classes*}
34.5
34.6 text{*
34.7 -We now start with the theory of ordering relations, which we want to phrase
34.8 +We now start with the theory of ordering relations, which we shall phrase
34.9 in terms of the two binary symbols @{text"<<"} and @{text"<<="}
34.10 to avoid clashes with @{text"<"} and @{text"\<le>"} in theory @{text
34.11 Main}. To restrict the application of @{text"<<"} and @{text"<<="} we
34.12 @@ -14,9 +14,9 @@
34.13
34.14 text{*\noindent
34.15 This introduces a new class @{text ordrel} and makes it a subclass of
34.16 -the predefined class @{text term}\footnote{The quotes around @{text term}
34.17 -simply avoid the clash with the command \isacommand{term}.} --- @{text term}
34.18 -is the class of all HOL types, like ``Object'' in Java.
34.19 +the predefined class @{text term}, which
34.20 +is the class of all HOL types.\footnote{The quotes around @{text term}
34.21 +simply avoid the clash with the command \isacommand{term}.}
34.22 This is a degenerate form of axiomatic type class without any axioms.
34.23 Its sole purpose is to restrict the use of overloaded constants to meaningful
34.24 instances:
34.25 @@ -30,7 +30,7 @@
34.26 constrained with a class; the constraint is propagated to the other
34.27 occurrences automatically.
34.28
34.29 -So far there is not a single type of class @{text ordrel}. To breathe life
34.30 +So far there are no types of class @{text ordrel}. To breathe life
34.31 into @{text ordrel} we need to declare a type to be an \bfindex{instance} of
34.32 @{text ordrel}:
34.33 *}
34.34 @@ -41,7 +41,7 @@
34.35 Command \isacommand{instance} actually starts a proof, namely that
34.36 @{typ bool} satisfies all axioms of @{text ordrel}.
34.37 There are none, but we still need to finish that proof, which we do
34.38 -by invoking a fixed predefined method:
34.39 +by invoking the \methdx{intro_classes} method:
34.40 *}
34.41
34.42 by intro_classes
34.43 @@ -59,12 +59,13 @@
34.44 less_bool_def: "P << Q \<equiv> \<not>P \<and> Q"
34.45
34.46 text{*\noindent
34.47 -Now @{prop"False <<= P"} is provable
34.48 +Now @{prop"False <<= P"} is provable:
34.49 *}
34.50
34.51 lemma "False <<= P"
34.52 by(simp add: le_bool_def)
34.53
34.54 text{*\noindent
34.55 -whereas @{text"[] <<= []"} is not even well-typed. In order to make it well-typed
34.56 +At this point, @{text"[] <<= []"} is not even well-typed.
34.57 +To make it well-typed,
34.58 we need to make lists a type of class @{text ordrel}:*}(*<*)end(*>*)
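Making lists an instance of \isa{ordrel}, as the text describes, takes one declaration plus the trivial \isa{intro_classes} proof; the quotes around \isa{term} avoid the clash with the \isacommand{term} command:

```isabelle
(* list maps types of class term to types of class ordrel;
   ordrel has no axioms, so intro_classes closes the proof *)
instance list :: ("term") ordrel
by intro_classes
```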
35.1 --- a/doc-src/TutorialI/Types/Overloading2.thy Thu Aug 09 10:17:45 2001 +0200
35.2 +++ b/doc-src/TutorialI/Types/Overloading2.thy Thu Aug 09 18:12:15 2001 +0200
35.3 @@ -16,6 +16,12 @@
35.4
35.5 text{*\noindent
35.6 The infix function @{text"!"} yields the nth element of a list.
35.7 +
35.8 +\begin{warn}
35.9 +A type constructor can be instantiated in only one way to
35.10 +a given type class. For example, our two instantiations of \isa{list} must
35.11 +reside in separate theories with disjoint scopes.\REMARK{Tobias, please check}
35.12 +\end{warn}
35.13 *}
35.14
35.15 subsubsection{*Predefined Overloading*}
35.16 @@ -24,7 +30,7 @@
35.17 HOL comes with a number of overloaded constants and corresponding classes.
35.18 The most important ones are listed in Table~\ref{tab:overloading} in the appendix. They are
35.19 defined on all numeric types and sometimes on other types as well, for example
35.20 -@{text"-"}, @{text"\<le>"} and @{text"<"} on sets.
35.21 +$-$ and @{text"\<le>"} on sets.
35.22
35.23 In addition there is a special input syntax for bounded quantifiers:
35.24 \begin{center}
36.1 --- a/doc-src/TutorialI/Types/Pairs.thy Thu Aug 09 10:17:45 2001 +0200
36.2 +++ b/doc-src/TutorialI/Types/Pairs.thy Thu Aug 09 18:12:15 2001 +0200
36.3 @@ -6,8 +6,8 @@
36.4 Ordered pairs were already introduced in \S\ref{sec:pairs}, but only with a minimal
36.5 repertoire of operations: pairing and the two projections @{term fst} and
36.6 @{term snd}. In any non-trivial application of pairs you will find that this
36.7 -quickly leads to unreadable formulae involving nests of projections. This
36.8 -section is concerned with introducing some syntactic sugar to overcome this
36.9 +quickly leads to unreadable nests of projections. This
36.10 +section introduces syntactic sugar to overcome this
36.11 problem: pattern matching with tuples.
36.12 *}
36.13
36.14 @@ -33,12 +33,11 @@
36.15 internal representation. Thus the internal and external form of a term may
36.16 differ, which can affect proofs. If you want to avoid this complication,
36.17 stick to @{term fst} and @{term snd} and write @{term"%p. fst p + snd p"}
36.18 -instead of @{text"\<lambda>(x,y). x+y"} (which denote the same function but are quite
36.19 -different terms).
36.20 +instead of @{text"\<lambda>(x,y). x+y"}. These terms are distinct even though they
36.21 +denote the same function.
36.22
36.23 Internally, @{term"%(x,y). t"} becomes @{text"split (\<lambda>x y. t)"}, where
36.24 -@{term split}\indexbold{*split (constant)}
36.25 -is the uncurrying function of type @{text"('a \<Rightarrow> 'b
36.26 +\cdx{split} is the uncurrying function of type @{text"('a \<Rightarrow> 'b
36.27 \<Rightarrow> 'c) \<Rightarrow> 'a \<times> 'b \<Rightarrow> 'c"} defined as
36.28 \begin{center}
36.29 @{thm split_def}
36.30 @@ -64,16 +63,17 @@
36.31 approach is neither elegant nor very practical in large examples, although it
36.32 can be effective in small ones.
36.33
36.34 -If we step back and ponder why the above lemma presented a problem in the
36.35 -first place, we quickly realize that what we would like is to replace @{term
36.36 -p} with some concrete pair @{term"(a,b)"}, in which case both sides of the
36.37 -equation would simplify to @{term a} because of the simplification rules
36.38 -@{thm split_conv[no_vars]} and @{thm fst_conv[no_vars]}. This is the
36.39 -key problem one faces when reasoning about pattern matching with pairs: how to
36.40 -convert some atomic term into a pair.
36.41 +If we consider why this lemma presents a problem,
36.42 +we quickly realize that we need to replace the variable~@{term
36.43 +p} by some pair @{term"(a,b)"}. Then both sides of the
36.44 +equation would simplify to @{term a} by the simplification rules
36.45 +@{thm split_conv[no_vars]} and @{thm fst_conv[no_vars]}.
36.46 +To reason about tuple patterns requires some way of
36.47 +converting a variable of product type into a pair.
36.48
36.49 In case of a subterm of the form @{term"split f p"} this is easy: the split
36.50 -rule @{thm[source]split_split} replaces @{term p} by a pair:
36.51 +rule @{thm[source]split_split} replaces @{term p} by a pair:%
36.52 +\index{*split (method)}
36.53 *}
36.54
36.55 lemma "(\<lambda>(x,y).y) p = snd p"
36.56 @@ -147,7 +147,7 @@
36.57
36.58 txt{*\noindent
36.59 @{subgoals[display,indent=0]}
36.60 -Again, @{text case_tac} is applicable because @{text"\<times>"} is a datatype.
36.61 +Again, \methdx{case_tac} is applicable because @{text"\<times>"} is a datatype.
36.62 The subgoal is easily proved by @{text simp}.
36.63
36.64 Splitting by @{text case_tac} also solves the previous examples and may thus
36.65 @@ -172,10 +172,13 @@
36.66
36.67 text{*\noindent
36.68 Note that we have intentionally included only @{thm[source]split_paired_all}
36.69 -in the first simplification step. This time the reason was not merely
36.70 +in the first simplification step, and then we simplify again.
36.71 +This time the reason was not merely
36.72 pedagogical:
36.73 -@{thm[source]split_paired_all} may interfere with certain congruence
36.74 -rules of the simplifier, i.e.
36.75 +@{thm[source]split_paired_all} may interfere with other functions
36.76 +of the simplifier.
36.77 +The following command could fail (here it does not)
36.78 +where two separate \isa{simp} applications succeed.
36.79 *}
36.80
36.81 (*<*)
36.82 @@ -184,18 +187,15 @@
36.83 apply(simp add:split_paired_all)
36.84 (*<*)done(*>*)
36.85 text{*\noindent
36.86 -may fail (here it does not) where the above two stages succeed.
36.87 -
36.88 -Finally, all @{text"\<forall>"} and @{text"\<exists>"}-quantified variables are split
36.89 -automatically by the simplifier:
36.90 +Finally, the simplifier automatically splits all @{text"\<forall>"} and
36.91 +@{text"\<exists>"}-quantified variables:
36.92 *}
36.93
36.94 lemma "\<forall>p. \<exists>q. swap p = swap q"
36.95 -apply simp;
36.96 -done
36.97 +by simp;
36.98
36.99 text{*\noindent
36.100 -In case you would like to turn off this automatic splitting, just disable the
36.101 +To turn off this automatic splitting, just disable the
36.102 responsible simplification rules:
36.103 \begin{center}
36.104 @{thm split_paired_All[no_vars]}
37.1 --- a/doc-src/TutorialI/Types/Typedef.thy Thu Aug 09 10:17:45 2001 +0200
37.2 +++ b/doc-src/TutorialI/Types/Typedef.thy Thu Aug 09 18:12:15 2001 +0200
37.3 @@ -5,8 +5,8 @@
37.4 text{*\label{sec:adv-typedef}
37.5 For most applications, a combination of predefined types like @{typ bool} and
37.6 @{text"\<Rightarrow>"} with recursive datatypes and records is quite sufficient. Very
37.7 -occasionally you may feel the need for a more advanced type. If you cannot do
37.8 -without that type, and you are certain that it is not definable by any of the
37.9 +occasionally you may feel the need for a more advanced type. If you
37.10 +are certain that your type is not definable by any of the
37.11 standard means, then read on.
37.12 \begin{warn}
37.13 Types in HOL must be non-empty; otherwise the quantifier rules would be
37.14 @@ -164,37 +164,25 @@
37.15 we merely need to prove that @{term A}, @{term B} and @{term C} are distinct
37.16 and that they exhaust the type.
37.17
37.18 -We start with a helpful version of injectivity of @{term Abs_three} on the
37.19 -representing subset:
37.20 -*}
37.21 -
37.22 -lemma [simp]:
37.23 - "\<lbrakk> x \<in> three; y \<in> three \<rbrakk> \<Longrightarrow> (Abs_three x = Abs_three y) = (x=y)";
37.24 -
37.25 -txt{*\noindent
37.26 -We prove each direction separately. From @{prop"Abs_three x = Abs_three y"}
37.27 -we use @{thm[source]arg_cong} to apply @{term Rep_three} to both sides,
37.28 -deriving @{prop[display]"Rep_three(Abs_three x) = Rep_three(Abs_three y)"}
37.29 -Thus we get the required @{prop"x =
37.30 -y"} by simplification with @{thm[source]Abs_three_inverse}.
37.31 -The other direction
37.32 -is trivial by simplification:
37.33 -*}
37.34 -
37.35 -apply(rule iffI);
37.36 - apply(drule_tac f = Rep_three in arg_cong);
37.37 - apply(simp add:Abs_three_inverse);
37.38 -by simp;
37.39 -
37.40 -text{*\noindent
37.41 -Analogous lemmas can be proved in the same way for arbitrary type definitions.
37.42 -
37.43 +In processing our \isacommand{typedef} declaration,
37.44 +Isabelle helpfully proves several lemmas.
37.45 +One, @{thm[source]Abs_three_inject},
37.46 +expresses that @{term Abs_three} is injective on the representing subset:
37.47 +\begin{center}
37.48 +@{thm Abs_three_inject[no_vars]}
37.49 +\end{center}
37.50 +Another, @{thm[source]Rep_three_inject}, expresses that the representation
37.51 +function is also injective:
37.52 +\begin{center}
37.53 +@{thm Rep_three_inject[no_vars]}
37.54 +\end{center}
37.55 Distinctness of @{term A}, @{term B} and @{term C} follows immediately
37.56 -if we expand their definitions and rewrite with the above simplification rule:
37.57 +if we expand their definitions and rewrite with the injectivity
37.58 +of @{term Abs_three}:
37.59 *}
37.60
37.61 lemma "A \<noteq> B \<and> B \<noteq> A \<and> A \<noteq> C \<and> C \<noteq> A \<and> B \<noteq> C \<and> C \<noteq> B";
37.62 -by(simp add:A_def B_def C_def three_def);
37.63 +by(simp add: Abs_three_inject A_def B_def C_def three_def);
37.64
37.65 text{*\noindent
37.66 Of course we rely on the simplifier to solve goals like @{prop"0 \<noteq> 1"}.
37.67 @@ -221,7 +209,8 @@
37.68 done
37.69
37.70 text{*
37.71 -Now the case distinction lemma on type @{typ three} is easy to derive if you know how to:
37.72 +Now the case distinction lemma on type @{typ three} is easy to derive if you
37.73 +know how:
37.74 *}
37.75
37.76 lemma three_cases: "\<lbrakk> P A; P B; P C \<rbrakk> \<Longrightarrow> P x";
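As an aside on the construction itself: the \isacommand{typedef} above carves the abstract type out of the representing subset $\{0,1,2\}$, with \isa{Abs_three} injective on that subset and \isa{Rep_three} injective everywhere. The same idiom can be sketched outside Isabelle as a newtype with a guarded constructor; the Rust names below are invented for illustration and are not part of the tutorial.

```rust
// Hypothetical Rust analogue of `typedef three`: an abstract type isomorphic
// to the representing subset {0, 1, 2} of u8, with an abstraction map defined
// only on that subset and an everywhere-injective representation map.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Three(u8); // field stays private, so only abs() builds values

impl Three {
    // Plays the role of Abs_three, guarded by membership in the subset.
    pub fn abs(n: u8) -> Option<Three> {
        if n < 3 { Some(Three(n)) } else { None }
    }
    // Plays the role of Rep_three; injective, since the wrapped value
    // is the whole identity of a Three.
    pub fn rep(self) -> u8 {
        self.0
    }
}

fn main() {
    let a = Three::abs(0).unwrap();
    let b = Three::abs(1).unwrap();
    // Distinctness of the three elements follows from injectivity on the subset.
    assert_ne!(a, b);
    assert!(Three::abs(3).is_none()); // outside the representing subset
    println!("ok");
}
```

Unlike Isabelle, nothing here proves exhaustiveness; the case distinction lemma has no mechanical counterpart in this sketch.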
38.1 --- a/doc-src/TutorialI/Types/document/Axioms.tex Thu Aug 09 10:17:45 2001 +0200
38.2 +++ b/doc-src/TutorialI/Types/document/Axioms.tex Thu Aug 09 18:12:15 2001 +0200
38.3 @@ -6,10 +6,10 @@
38.4 }
38.5 %
38.6 \begin{isamarkuptext}%
38.7 -Now we want to attach axioms to our classes. Then we can reason on the
38.8 -level of classes and the results will be applicable to all types in a class,
38.9 -just as in axiomatic mathematics. These ideas are demonstrated by means of
38.10 -our above ordering relations.%
38.11 +Attaching axioms to our classes lets us reason on the
38.12 +level of classes. The results will be applicable to all types in a class,
38.13 +just as in axiomatic mathematics. These ideas are demonstrated by means of
38.14 +our ordering relations.%
38.15 \end{isamarkuptext}%
38.16 %
38.17 \isamarkupsubsubsection{Partial Orders%
38.18 @@ -29,13 +29,13 @@
38.19 The first three axioms are the familiar ones, and the final one
38.20 requires that \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} are related as expected.
38.21 Note that behind the scenes, Isabelle has restricted the axioms to class
38.22 -\isa{parord}. For example, this is what \isa{refl} really looks like:
38.23 +\isa{parord}. For example, the axiom \isa{refl} really is
38.24 \isa{{\isacharparenleft}{\isacharquery}x{\isasymColon}{\isacharquery}{\isacharprime}a{\isasymColon}parord{\isacharparenright}\ {\isacharless}{\isacharless}{\isacharequal}\ {\isacharquery}x}.
38.25
38.26 We have not made \isa{less{\isacharunderscore}le} a global definition because it would
38.27 fix once and for all that \isa{{\isacharless}{\isacharless}} is defined in terms of \isa{{\isacharless}{\isacharless}{\isacharequal}} and
38.28 never the other way around. Below you will see why we want to avoid this
38.29 -asymmetry. The drawback of the above class is that
38.30 +asymmetry. The drawback of our choice is that
38.31 we need to define both \isa{{\isacharless}{\isacharless}{\isacharequal}} and \isa{{\isacharless}{\isacharless}} for each instance.
38.32
38.33 We can now prove simple theorems in this abstract setting, for example
38.34 @@ -44,14 +44,16 @@
38.35 \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}x{\isacharcolon}{\isacharcolon}{\isacharprime}a{\isacharcolon}{\isacharcolon}parord{\isacharparenright}\ {\isacharless}{\isacharless}\ y\ {\isasymLongrightarrow}\ {\isacharparenleft}{\isasymnot}\ y\ {\isacharless}{\isacharless}\ x{\isacharparenright}\ {\isacharequal}\ True{\isachardoublequote}%
38.36 \begin{isamarkuptxt}%
38.37 \noindent
38.38 -The conclusion is not simply \isa{{\isasymnot}\ y\ {\isacharless}{\isacharless}\ x} because the preprocessor
38.39 -of the simplifier (see \S\ref{sec:simp-preprocessor})
38.40 -would turn this into \isa{{\isacharparenleft}y\ {\isacharless}{\isacharless}\ x{\isacharparenright}\ {\isacharequal}\ False}, thus yielding
38.41 -a nonterminating rewrite rule. In the above form it is a generally useful
38.42 -rule.
38.43 +The conclusion is not just \isa{{\isasymnot}\ y\ {\isacharless}{\isacharless}\ x} because the
38.44 +simplifier's preprocessor (see \S\ref{sec:simp-preprocessor})
38.45 +would turn it into \isa{{\isacharparenleft}y\ {\isacharless}{\isacharless}\ x{\isacharparenright}\ {\isacharequal}\ False}, yielding
38.46 +a nonterminating rewrite rule.
38.47 +(It would be used to try to prove its own precondition \emph{ad
38.48 + infinitum}.)
38.49 +In the form above, the rule is useful.
38.50 The type constraint is necessary because otherwise Isabelle would only assume
38.51 -\isa{{\isacharprime}a{\isacharcolon}{\isacharcolon}ordrel} (as required in the type of \isa{{\isacharless}{\isacharless}}), in
38.52 -which case the proposition is not a theorem. The proof is easy:%
38.53 +\isa{{\isacharprime}a{\isacharcolon}{\isacharcolon}ordrel} (as required in the type of \isa{{\isacharless}{\isacharless}}),
38.54 +and then the proposition is not a theorem. The proof is easy:%
38.55 \end{isamarkuptxt}%
38.56 \isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}less{\isacharunderscore}le\ antisym{\isacharparenright}%
38.57 \begin{isamarkuptext}%
38.58 @@ -73,8 +75,9 @@
38.59 \ {\isadigit{3}}{\isachardot}\ {\isasymAnd}{\isacharparenleft}x{\isasymColon}bool{\isacharparenright}\ y{\isasymColon}bool{\isachardot}\ {\isasymlbrakk}x\ {\isacharless}{\isacharless}{\isacharequal}\ y{\isacharsemicolon}\ y\ {\isacharless}{\isacharless}{\isacharequal}\ x{\isasymrbrakk}\ {\isasymLongrightarrow}\ x\ {\isacharequal}\ y\isanewline
38.60 \ {\isadigit{4}}{\isachardot}\ {\isasymAnd}{\isacharparenleft}x{\isasymColon}bool{\isacharparenright}\ y{\isasymColon}bool{\isachardot}\ {\isacharparenleft}x\ {\isacharless}{\isacharless}\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}x\ {\isacharless}{\isacharless}{\isacharequal}\ y\ {\isasymand}\ x\ {\isasymnoteq}\ y{\isacharparenright}%
38.61 \end{isabelle}
38.62 -Fortunately, the proof is easy for blast, once we have unfolded the definitions
38.63 -of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} at \isa{bool}:%
38.64 +Fortunately, the proof is easy for \isa{blast}
38.65 +once we have unfolded the definitions
38.66 +of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} at type \isa{bool}:%
38.67 \end{isamarkuptxt}%
38.68 \isacommand{apply}{\isacharparenleft}simp{\isacharunderscore}all\ {\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}use{\isacharparenright}\ only{\isacharcolon}\ le{\isacharunderscore}bool{\isacharunderscore}def\ less{\isacharunderscore}bool{\isacharunderscore}def{\isacharparenright}\isanewline
38.69 \isacommand{by}{\isacharparenleft}blast{\isacharcomma}\ blast{\isacharcomma}\ blast{\isacharcomma}\ blast{\isacharparenright}%
38.70 @@ -89,10 +92,9 @@
38.71 \begin{isamarkuptext}%
38.72 \noindent
38.73 The effect is not stunning, but it demonstrates the principle. It also shows
38.74 -that tools like the simplifier can deal with generic rules as well.
38.75 -It should be clear that the main advantage of the axiomatic method is that
38.76 -theorems can be proved in the abstract and one does not need to repeat the
38.77 -proof for each instance.%
38.78 +that tools like the simplifier can deal with generic rules.
38.79 +The main advantage of the axiomatic method is that
38.80 +theorems can be proved in the abstract and freely reused for each instance.%
38.81 \end{isamarkuptext}%
38.82 %
38.83 \isamarkupsubsubsection{Linear Orders%
38.84 @@ -100,7 +102,7 @@
38.85 %
38.86 \begin{isamarkuptext}%
38.87 If any two elements of a partial order are comparable it is a
38.88 -\emph{linear} or \emph{total} order:%
38.89 +\textbf{linear} or \textbf{total} order:%
38.90 \end{isamarkuptext}%
38.91 \isacommand{axclass}\ linord\ {\isacharless}\ parord\isanewline
38.92 linear{\isacharcolon}\ {\isachardoublequote}x\ {\isacharless}{\isacharless}{\isacharequal}\ y\ {\isasymor}\ y\ {\isacharless}{\isacharless}{\isacharequal}\ x{\isachardoublequote}%
38.93 @@ -116,9 +118,10 @@
38.94 \isacommand{apply}\ blast\isanewline
38.95 \isacommand{done}%
38.96 \begin{isamarkuptext}%
38.97 -Linear orders are an example of subclassing by construction, which is the most
38.98 -common case. It is also possible to prove additional subclass relationships
38.99 -later on, i.e.\ subclassing by proof. This is the topic of the following
38.100 +Linear orders are an example of subclassing\index{subclasses}
38.101 +by construction, which is the most
38.102 +common case. Subclass relationships can also be proved.
38.103 +This is the topic of the following
38.104 paragraph.%
38.105 \end{isamarkuptext}%
38.106 %
38.107 @@ -127,7 +130,7 @@
38.108 %
38.109 \begin{isamarkuptext}%
38.110 An alternative axiomatization of partial orders takes \isa{{\isacharless}{\isacharless}} rather than
38.111 -\isa{{\isacharless}{\isacharless}{\isacharequal}} as the primary concept. The result is a \emph{strict} order:%
38.112 +\isa{{\isacharless}{\isacharless}{\isacharequal}} as the primary concept. The result is a \textbf{strict} order:%
38.113 \end{isamarkuptext}%
38.114 \isacommand{axclass}\ strord\ {\isacharless}\ ordrel\isanewline
38.115 irrefl{\isacharcolon}\ \ \ \ \ {\isachardoublequote}{\isasymnot}\ x\ {\isacharless}{\isacharless}\ x{\isachardoublequote}\isanewline
38.116 @@ -167,7 +170,7 @@
38.117 %
38.118 \begin{isamarkuptext}%
38.119 A class may inherit from more than one direct superclass. This is called
38.120 -multiple inheritance and is certainly permitted. For example we could define
38.121 +\bfindex{multiple inheritance}. For example, we could define
38.122 the classes of well-founded orderings and well-orderings:%
38.123 \end{isamarkuptext}%
38.124 \isacommand{axclass}\ wford\ {\isacharless}\ parord\isanewline
38.125 @@ -212,15 +215,14 @@
38.126 This concludes our demonstration of type classes based on orderings. We
38.127 remind our readers that \isa{Main} contains a theory of
38.128 orderings phrased in terms of the usual \isa{{\isasymle}} and \isa{{\isacharless}}.
38.129 -It is recommended that, if possible,
38.130 -you base your own ordering relations on this theory.%
38.131 +If possible, base your own ordering relations on this theory.%
38.132 \end{isamarkuptext}%
38.133 %
38.134 \isamarkupsubsubsection{Inconsistencies%
38.135 }
38.136 %
38.137 \begin{isamarkuptext}%
38.138 -The reader may be wondering what happens if we, maybe accidentally,
38.139 +The reader may be wondering what happens if we
38.140 attach an inconsistent set of axioms to a class. So far we have always
38.141 avoided to add new axioms to HOL for fear of inconsistencies and suddenly it
38.142 seems that we are throwing all caution to the wind. So why is there no
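The axclass and instance machinery revised above is loosely analogous to traits with documented laws. The following is a minimal sketch assuming only the definitions \isa{le_bool_def} and \isa{less_bool_def} quoted in the text; unlike Isabelle's axiomatic classes, the order axioms here are mere comments, not checked proof obligations.

```rust
// Sketch (not Isabelle's mechanism): `OrdRel` only declares the syntax
// <<= and <<; `ParOrd` is a subclass by construction. The axioms refl,
// trans, antisym and "<< equals <<= plus distinctness" would live in
// comments only -- nothing here is machine-checked.
trait OrdRel {
    fn le(&self, other: &Self) -> bool; // <<=
    fn lt(&self, other: &Self) -> bool; // <<
}

trait ParOrd: OrdRel {} // every ParOrd instance is an OrdRel

// The instance at bool, unfolding le_bool_def and less_bool_def.
impl OrdRel for bool {
    fn le(&self, other: &Self) -> bool {
        !*self || *other // P <<= Q  is  P --> Q
    }
    fn lt(&self, other: &Self) -> bool {
        !*self && *other // P << Q  is  not P and Q
    }
}
impl ParOrd for bool {}

fn main() {
    // Qualified calls avoid a clash with the built-in PartialOrd methods,
    // mirroring the tutorial's choice of << and <<= to avoid < and <=.
    assert!(OrdRel::le(&false, &true)); // False <<= P, at P = True
    assert!(!OrdRel::lt(&true, &true)); // << is irreflexive at bool
    println!("ok");
}
```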
39.1 --- a/doc-src/TutorialI/Types/document/Overloading.tex Thu Aug 09 10:17:45 2001 +0200
39.2 +++ b/doc-src/TutorialI/Types/document/Overloading.tex Thu Aug 09 18:12:15 2001 +0200
39.3 @@ -6,7 +6,8 @@
39.4 \begin{isamarkuptext}%
39.5 \noindent
39.6 This \isacommand{instance} declaration can be read like the declaration of
39.7 -a function on types: the constructor \isa{list} maps types of class \isa{term}, i.e.\ all HOL types, to types of class \isa{ordrel}, i.e.\
39.8 +a function on types. The constructor \isa{list} maps types of class \isa{term} (all HOL types) to types of class \isa{ordrel};
39.9 +in other words,
39.10 if \isa{ty\ {\isacharcolon}{\isacharcolon}\ term} then \isa{ty\ list\ {\isacharcolon}{\isacharcolon}\ ordrel}.
39.11 Of course we should also define the meaning of \isa{{\isacharless}{\isacharless}{\isacharequal}} and
39.12 \isa{{\isacharless}{\isacharless}} on lists:%
40.1 --- a/doc-src/TutorialI/Types/document/Overloading0.tex Thu Aug 09 10:17:45 2001 +0200
40.2 +++ b/doc-src/TutorialI/Types/document/Overloading0.tex Thu Aug 09 18:12:15 2001 +0200
40.3 @@ -37,11 +37,9 @@
40.4 warnings to that effect.
40.5
40.6 However, there is nothing to prevent the user from forming terms such as
40.7 -\isa{inverse\ {\isacharbrackleft}{\isacharbrackright}} and proving theorems such as \isa{inverse\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ inverse\ {\isacharbrackleft}{\isacharbrackright}}, although we never defined inverse on lists. We hasten to say
40.8 -that there is nothing wrong with such terms and theorems. But it would be
40.9 -nice if we could prevent their formation, simply because it is very likely
40.10 -that the user did not mean to write what he did. Thus she had better not waste
40.11 -her time pursuing it further. This requires the use of type classes.%
40.12 +\isa{inverse\ {\isacharbrackleft}{\isacharbrackright}} and proving theorems such as \isa{inverse\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ inverse\ {\isacharbrackleft}{\isacharbrackright}} even though inverse is not defined on lists. Proving theorems about
40.13 +undefined constants does not endanger soundness, but it is pointless.
40.14 +Preventing such terms from even being formed requires the use of type classes.%
40.15 \end{isamarkuptext}%
40.16 \end{isabellebody}%
40.17 %%% Local Variables:
41.1 --- a/doc-src/TutorialI/Types/document/Overloading1.tex Thu Aug 09 10:17:45 2001 +0200
41.2 +++ b/doc-src/TutorialI/Types/document/Overloading1.tex Thu Aug 09 18:12:15 2001 +0200
41.3 @@ -6,7 +6,7 @@
41.4 }
41.5 %
41.6 \begin{isamarkuptext}%
41.7 -We now start with the theory of ordering relations, which we want to phrase
41.8 +We now start with the theory of ordering relations, which we shall phrase
41.9 in terms of the two binary symbols \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}}
41.10 to avoid clashes with \isa{{\isacharless}} and \isa{{\isasymle}} in theory \isa{Main}. To restrict the application of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} we
41.11 introduce the class \isa{ordrel}:%
41.12 @@ -15,9 +15,9 @@
41.13 \begin{isamarkuptext}%
41.14 \noindent
41.15 This introduces a new class \isa{ordrel} and makes it a subclass of
41.16 -the predefined class \isa{term}\footnote{The quotes around \isa{term}
41.17 -simply avoid the clash with the command \isacommand{term}.} --- \isa{term}
41.18 -is the class of all HOL types, like ``Object'' in Java.
41.19 +the predefined class \isa{term}, which
41.20 +is the class of all HOL types.\footnote{The quotes around \isa{term}
41.21 +simply avoid the clash with the command \isacommand{term}.}
41.22 This is a degenerate form of axiomatic type class without any axioms.
41.23 Its sole purpose is to restrict the use of overloaded constants to meaningful
41.24 instances:%
41.25 @@ -30,7 +30,7 @@
41.26 constrained with a class; the constraint is propagated to the other
41.27 occurrences automatically.
41.28
41.29 -So far there is not a single type of class \isa{ordrel}. To breathe life
41.30 +So far there are no types of class \isa{ordrel}. To breathe life
41.31 into \isa{ordrel} we need to declare a type to be an \bfindex{instance} of
41.32 \isa{ordrel}:%
41.33 \end{isamarkuptext}%
41.34 @@ -40,7 +40,7 @@
41.35 Command \isacommand{instance} actually starts a proof, namely that
41.36 \isa{bool} satisfies all axioms of \isa{ordrel}.
41.37 There are none, but we still need to finish that proof, which we do
41.38 -by invoking a fixed predefined method:%
41.39 +by invoking the \methdx{intro_classes} method:%
41.40 \end{isamarkuptxt}%
41.41 \isacommand{by}\ intro{\isacharunderscore}classes%
41.42 \begin{isamarkuptext}%
41.43 @@ -56,13 +56,14 @@
41.44 less{\isacharunderscore}bool{\isacharunderscore}def{\isacharcolon}\ {\isachardoublequote}P\ {\isacharless}{\isacharless}\ Q\ {\isasymequiv}\ {\isasymnot}P\ {\isasymand}\ Q{\isachardoublequote}%
41.45 \begin{isamarkuptext}%
41.46 \noindent
41.47 -Now \isa{False\ {\isacharless}{\isacharless}{\isacharequal}\ P} is provable%
41.48 +Now \isa{False\ {\isacharless}{\isacharless}{\isacharequal}\ P} is provable:%
41.49 \end{isamarkuptext}%
41.50 \isacommand{lemma}\ {\isachardoublequote}False\ {\isacharless}{\isacharless}{\isacharequal}\ P{\isachardoublequote}\isanewline
41.51 \isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}\ le{\isacharunderscore}bool{\isacharunderscore}def{\isacharparenright}%
41.52 \begin{isamarkuptext}%
41.53 \noindent
41.54 -whereas \isa{{\isacharbrackleft}{\isacharbrackright}\ {\isacharless}{\isacharless}{\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}} is not even well-typed. In order to make it well-typed
41.55 +At this point, \isa{{\isacharbrackleft}{\isacharbrackright}\ {\isacharless}{\isacharless}{\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}} is not even well-typed.
41.56 +To make it well-typed,
41.57 we need to make lists a type of class \isa{ordrel}:%
41.58 \end{isamarkuptext}%
41.59 \end{isabellebody}%
42.1 --- a/doc-src/TutorialI/Types/document/Overloading2.tex Thu Aug 09 10:17:45 2001 +0200
42.2 +++ b/doc-src/TutorialI/Types/document/Overloading2.tex Thu Aug 09 18:12:15 2001 +0200
42.3 @@ -16,7 +16,13 @@
42.4 \ \ \ \ \ \ \ \ \ \ \ \ \ \ size\ xs\ {\isacharequal}\ size\ ys\ {\isasymand}\ {\isacharparenleft}{\isasymforall}i{\isacharless}size\ xs{\isachardot}\ xs{\isacharbang}i\ {\isacharless}{\isacharless}{\isacharequal}\ ys{\isacharbang}i{\isacharparenright}{\isachardoublequote}%
42.5 \begin{isamarkuptext}%
42.6 \noindent
42.7 -The infix function \isa{{\isacharbang}} yields the nth element of a list.%
42.8 +The infix function \isa{{\isacharbang}} yields the nth element of a list.
42.9 +
42.10 +\begin{warn}
42.11 +A type constructor can be instantiated in only one way to
42.12 +a given type class. For example, our two instantiations of \isa{list} must
42.13 +reside in separate theories with disjoint scopes.\REMARK{Tobias, please check}
42.14 +\end{warn}%
42.15 \end{isamarkuptext}%
42.16 %
42.17 \isamarkupsubsubsection{Predefined Overloading%
42.18 @@ -26,7 +32,7 @@
42.19 HOL comes with a number of overloaded constants and corresponding classes.
42.20 The most important ones are listed in Table~\ref{tab:overloading} in the appendix. They are
42.21 defined on all numeric types and sometimes on other types as well, for example
42.22 -\isa{{\isacharminus}}, \isa{{\isasymle}} and \isa{{\isacharless}} on sets.
42.23 +$-$ and \isa{{\isasymle}} on sets.
42.24
42.25 In addition there is a special input syntax for bounded quantifiers:
42.26 \begin{center}
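The list ordering displayed in this hunk (equal length, componentwise \isa{{\isacharless}{\isacharless}{\isacharequal}}) can be sketched executably. The helper name `le_list` is invented, and the element ordering is specialized to \isa{bool} via \isa{le_bool_def} to keep the sketch self-contained.

```rust
// The displayed ordering on lists, specialized to bool:
// xs <<= ys  iff  size xs = size ys  and  xs!i <<= ys!i for all i < size xs,
// where p <<= q at bool unfolds to (not p or q).
fn le_list(xs: &[bool], ys: &[bool]) -> bool {
    xs.len() == ys.len() && xs.iter().zip(ys.iter()).all(|(x, y)| !*x || *y)
}

fn main() {
    assert!(le_list(&[false, true], &[true, true])); // pointwise <<= holds
    assert!(!le_list(&[true], &[true, true]));       // lengths differ
    println!("ok");
}
```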
43.1 --- a/doc-src/TutorialI/Types/document/Pairs.tex Thu Aug 09 10:17:45 2001 +0200
43.2 +++ b/doc-src/TutorialI/Types/document/Pairs.tex Thu Aug 09 18:12:15 2001 +0200
43.3 @@ -10,8 +10,8 @@
43.4 Ordered pairs were already introduced in \S\ref{sec:pairs}, but only with a minimal
43.5 repertoire of operations: pairing and the two projections \isa{fst} and
43.6 \isa{snd}. In any non-trivial application of pairs you will find that this
43.7 -quickly leads to unreadable formulae involving nests of projections. This
43.8 -section is concerned with introducing some syntactic sugar to overcome this
43.9 +quickly leads to unreadable nests of projections. This
43.10 +section introduces syntactic sugar to overcome this
43.11 problem: pattern matching with tuples.%
43.12 \end{isamarkuptext}%
43.13 %
43.14 @@ -38,12 +38,11 @@
43.15 internal representation. Thus the internal and external form of a term may
43.16 differ, which can affect proofs. If you want to avoid this complication,
43.17 stick to \isa{fst} and \isa{snd} and write \isa{{\isasymlambda}p{\isachardot}\ fst\ p\ {\isacharplus}\ snd\ p}
43.18 -instead of \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharplus}y} (which denote the same function but are quite
43.19 -different terms).
43.20 +instead of \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharplus}y}. These terms are distinct even though they
43.21 +denote the same function.
43.22
43.23 Internally, \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}{\isachardot}\ t} becomes \isa{split\ {\isacharparenleft}{\isasymlambda}x\ y{\isachardot}\ t{\isacharparenright}}, where
43.24 -\isa{split}\indexbold{*split (constant)}
43.25 -is the uncurrying function of type \isa{{\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}b\ {\isasymRightarrow}\ {\isacharprime}c{\isacharparenright}\ {\isasymRightarrow}\ {\isacharprime}a\ {\isasymtimes}\ {\isacharprime}b\ {\isasymRightarrow}\ {\isacharprime}c} defined as
43.26 +\cdx{split} is the uncurrying function of type \isa{{\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}b\ {\isasymRightarrow}\ {\isacharprime}c{\isacharparenright}\ {\isasymRightarrow}\ {\isacharprime}a\ {\isasymtimes}\ {\isacharprime}b\ {\isasymRightarrow}\ {\isacharprime}c} defined as
43.27 \begin{center}
43.28 \isa{split\ {\isasymequiv}\ {\isasymlambda}c\ p{\isachardot}\ c\ {\isacharparenleft}fst\ p{\isacharparenright}\ {\isacharparenleft}snd\ p{\isacharparenright}}
43.29 \hfill(\isa{split{\isacharunderscore}def})
43.30 @@ -68,15 +67,16 @@
43.31 approach is neither elegant nor very practical in large examples, although it
43.32 can be effective in small ones.
43.33
43.34 -If we step back and ponder why the above lemma presented a problem in the
43.35 -first place, we quickly realize that what we would like is to replace \isa{p} with some concrete pair \isa{{\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}}, in which case both sides of the
43.36 -equation would simplify to \isa{a} because of the simplification rules
43.37 -\isa{split\ c\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isacharequal}\ c\ a\ b} and \isa{fst\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isacharequal}\ a}. This is the
43.38 -key problem one faces when reasoning about pattern matching with pairs: how to
43.39 -convert some atomic term into a pair.
43.40 +If we consider why this lemma presents a problem,
43.41 +we quickly realize that we need to replace the variable~\isa{p} by some pair \isa{{\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}}. Then both sides of the
43.42 +equation would simplify to \isa{a} by the simplification rules
43.43 +\isa{split\ c\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isacharequal}\ c\ a\ b} and \isa{fst\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isacharequal}\ a}.
43.44 +To reason about tuple patterns requires some way of
43.45 +converting a variable of product type into a pair.
43.46
43.47 In case of a subterm of the form \isa{split\ f\ p} this is easy: the split
43.48 rule \isa{split{\isacharunderscore}split} replaces \isa{p} by a pair:%
43.49 +\index{*split (method)}%
43.50 \end{isamarkuptext}%
43.51 \isacommand{lemma}\ {\isachardoublequote}{\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}y{\isacharparenright}\ p\ {\isacharequal}\ snd\ p{\isachardoublequote}\isanewline
43.52 \isacommand{apply}{\isacharparenleft}split\ split{\isacharunderscore}split{\isacharparenright}%
43.53 @@ -142,7 +142,7 @@
43.54 \begin{isabelle}%
43.55 \ {\isadigit{1}}{\isachardot}\ {\isasymAnd}a\ b{\isachardot}\ p\ {\isacharequal}\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}\ {\isasymLongrightarrow}\ swap\ {\isacharparenleft}swap\ p{\isacharparenright}\ {\isacharequal}\ p%
43.56 \end{isabelle}
43.57 -Again, \isa{case{\isacharunderscore}tac} is applicable because \isa{{\isasymtimes}} is a datatype.
43.58 +Again, \methdx{case_tac} is applicable because \isa{{\isasymtimes}} is a datatype.
43.59 The subgoal is easily proved by \isa{simp}.
43.60
43.61 Splitting by \isa{case{\isacharunderscore}tac} also solves the previous examples and may thus
43.62 @@ -167,25 +167,25 @@
43.63 \begin{isamarkuptext}%
43.64 \noindent
43.65 Note that we have intentionally included only \isa{split{\isacharunderscore}paired{\isacharunderscore}all}
43.66 -in the first simplification step. This time the reason was not merely
43.67 +in the first simplification step, and then we simplify again.
43.68 +This time the reason was not merely
43.69 pedagogical:
43.70 -\isa{split{\isacharunderscore}paired{\isacharunderscore}all} may interfere with certain congruence
43.71 -rules of the simplifier, i.e.%
43.72 +\isa{split{\isacharunderscore}paired{\isacharunderscore}all} may interfere with other functions
43.73 +of the simplifier.
43.74 +The following command could fail (here it does not)
43.75 +whereas two separate \isa{simp} applications succeed.%
43.76 \end{isamarkuptext}%
43.77 \isacommand{apply}{\isacharparenleft}simp\ add{\isacharcolon}split{\isacharunderscore}paired{\isacharunderscore}all{\isacharparenright}%
43.78 \begin{isamarkuptext}%
43.79 \noindent
43.80 -may fail (here it does not) where the above two stages succeed.
43.81 -
43.82 -Finally, all \isa{{\isasymforall}} and \isa{{\isasymexists}}-quantified variables are split
43.83 -automatically by the simplifier:%
43.84 +Finally, the simplifier automatically splits all \isa{{\isasymforall}} and
43.85 +\isa{{\isasymexists}}-quantified variables:%
43.86 \end{isamarkuptext}%
43.87 \isacommand{lemma}\ {\isachardoublequote}{\isasymforall}p{\isachardot}\ {\isasymexists}q{\isachardot}\ swap\ p\ {\isacharequal}\ swap\ q{\isachardoublequote}\isanewline
43.88 -\isacommand{apply}\ simp\isanewline
43.89 -\isacommand{done}%
43.90 +\isacommand{by}\ simp%
43.91 \begin{isamarkuptext}%
43.92 \noindent
43.93 -In case you would like to turn off this automatic splitting, just disable the
43.94 +To turn off this automatic splitting, just disable the
43.95 responsible simplification rules:
43.96 \begin{center}
43.97 \isa{{\isacharparenleft}{\isasymforall}x{\isachardot}\ P\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isasymforall}a\ b{\isachardot}\ P\ {\isacharparenleft}a{\isacharcomma}\ b{\isacharparenright}{\isacharparenright}}
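The definitions \isa{split\ {\isasymequiv}\ {\isasymlambda}c\ p{\isachardot}\ c\ {\isacharparenleft}fst\ p{\isacharparenright}\ {\isacharparenleft}snd\ p{\isacharparenright}} and \isa{swap} from this section can be mirrored directly; a small sketch, with names matching the text but otherwise hypothetical:

```rust
// The uncurrying function from the text: split = \c p. c (fst p) (snd p),
// so that split c (a, b) = c a b.
fn split<A, B, C>(c: impl Fn(A, B) -> C, p: (A, B)) -> C {
    c(p.0, p.1) // p.0 and p.1 are fst p and snd p
}

// The swap function used in the section's running examples.
fn swap<A, B>(p: (A, B)) -> (B, A) {
    (p.1, p.0)
}

fn main() {
    // split (\x y. x + y) (1, 2) reduces to 1 + 2
    assert_eq!(split(|x: i32, y: i32| x + y, (1, 2)), 3);
    // swap (swap p) = p, the lemma proved by case_tac and simp in the text
    assert_eq!(swap(swap((1, "a"))), (1, "a"));
    println!("ok");
}
```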
44.1 --- a/doc-src/TutorialI/Types/document/Typedef.tex Thu Aug 09 10:17:45 2001 +0200
44.2 +++ b/doc-src/TutorialI/Types/document/Typedef.tex Thu Aug 09 18:12:15 2001 +0200
44.3 @@ -9,8 +9,8 @@
44.4 \label{sec:adv-typedef}
44.5 For most applications, a combination of predefined types like \isa{bool} and
44.6 \isa{{\isasymRightarrow}} with recursive datatypes and records is quite sufficient. Very
44.7 -occasionally you may feel the need for a more advanced type. If you cannot do
44.8 -without that type, and you are certain that it is not definable by any of the
44.9 +occasionally you may feel the need for a more advanced type. If you
44.10 +are certain that your type is not definable by any of the
44.11 standard means, then read on.
44.12 \begin{warn}
44.13 Types in HOL must be non-empty; otherwise the quantifier rules would be
44.14 @@ -165,35 +165,24 @@
44.15 we merely need to prove that \isa{A}, \isa{B} and \isa{C} are distinct
44.16 and that they exhaust the type.
44.17
44.18 -We start with a helpful version of injectivity of \isa{Abs{\isacharunderscore}three} on the
44.19 -representing subset:%
44.20 -\end{isamarkuptext}%
44.21 -\isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\isanewline
44.22 -\ {\isachardoublequote}{\isasymlbrakk}\ x\ {\isasymin}\ three{\isacharsemicolon}\ y\ {\isasymin}\ three\ {\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharparenleft}Abs{\isacharunderscore}three\ x\ {\isacharequal}\ Abs{\isacharunderscore}three\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}x{\isacharequal}y{\isacharparenright}{\isachardoublequote}%
44.23 -\begin{isamarkuptxt}%
44.24 -\noindent
44.25 -We prove each direction separately. From \isa{Abs{\isacharunderscore}three\ x\ {\isacharequal}\ Abs{\isacharunderscore}three\ y}
44.26 -we use \isa{arg{\isacharunderscore}cong} to apply \isa{Rep{\isacharunderscore}three} to both sides,
44.27 -deriving \begin{isabelle}%
44.28 -Rep{\isacharunderscore}three\ {\isacharparenleft}Abs{\isacharunderscore}three\ x{\isacharparenright}\ {\isacharequal}\ Rep{\isacharunderscore}three\ {\isacharparenleft}Abs{\isacharunderscore}three\ y{\isacharparenright}%
44.29 -\end{isabelle}
44.30 -Thus we get the required \isa{x\ {\isacharequal}\ y} by simplification with \isa{Abs{\isacharunderscore}three{\isacharunderscore}inverse}.
44.31 -The other direction
44.32 -is trivial by simplification:%
44.33 -\end{isamarkuptxt}%
44.34 -\isacommand{apply}{\isacharparenleft}rule\ iffI{\isacharparenright}\isanewline
44.35 -\ \isacommand{apply}{\isacharparenleft}drule{\isacharunderscore}tac\ f\ {\isacharequal}\ Rep{\isacharunderscore}three\ \isakeyword{in}\ arg{\isacharunderscore}cong{\isacharparenright}\isanewline
44.36 -\ \isacommand{apply}{\isacharparenleft}simp\ add{\isacharcolon}Abs{\isacharunderscore}three{\isacharunderscore}inverse{\isacharparenright}\isanewline
44.37 -\isacommand{by}\ simp%
44.38 -\begin{isamarkuptext}%
44.39 -\noindent
44.40 -Analogous lemmas can be proved in the same way for arbitrary type definitions.
44.41 -
44.42 +In processing our \isacommand{typedef} declaration,
44.43 +Isabelle helpfully proves several lemmas.
44.44 +One, \isa{Abs{\isacharunderscore}three{\isacharunderscore}inject},
44.45 +expresses that \isa{Abs{\isacharunderscore}three} is injective on the representing subset:
44.46 +\begin{center}
44.47 +\isa{{\isasymlbrakk}x\ {\isasymin}\ three{\isacharsemicolon}\ y\ {\isasymin}\ three{\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharparenleft}Abs{\isacharunderscore}three\ x\ {\isacharequal}\ Abs{\isacharunderscore}three\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}x\ {\isacharequal}\ y{\isacharparenright}}
44.48 +\end{center}
44.49 +Another, \isa{Rep{\isacharunderscore}three{\isacharunderscore}inject}, expresses that the representation
44.50 +function is also injective:
44.51 +\begin{center}
44.52 +\isa{{\isacharparenleft}Rep{\isacharunderscore}three\ x\ {\isacharequal}\ Rep{\isacharunderscore}three\ y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}x\ {\isacharequal}\ y{\isacharparenright}}
44.53 +\end{center}
44.54 Distinctness of \isa{A}, \isa{B} and \isa{C} follows immediately
44.55 -if we expand their definitions and rewrite with the above simplification rule:%
44.56 +if we expand their definitions and rewrite with the injectivity
44.57 +of \isa{Abs{\isacharunderscore}three}:%
44.58 \end{isamarkuptext}%
44.59 \isacommand{lemma}\ {\isachardoublequote}A\ {\isasymnoteq}\ B\ {\isasymand}\ B\ {\isasymnoteq}\ A\ {\isasymand}\ A\ {\isasymnoteq}\ C\ {\isasymand}\ C\ {\isasymnoteq}\ A\ {\isasymand}\ B\ {\isasymnoteq}\ C\ {\isasymand}\ C\ {\isasymnoteq}\ B{\isachardoublequote}\isanewline
44.60 -\isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}A{\isacharunderscore}def\ B{\isacharunderscore}def\ C{\isacharunderscore}def\ three{\isacharunderscore}def{\isacharparenright}%
44.61 +\isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}\ Abs{\isacharunderscore}three{\isacharunderscore}inject\ A{\isacharunderscore}def\ B{\isacharunderscore}def\ C{\isacharunderscore}def\ three{\isacharunderscore}def{\isacharparenright}%
44.62 \begin{isamarkuptext}%
44.63 \noindent
44.64 Of course we rely on the simplifier to solve goals like \isa{{\isadigit{0}}\ {\isasymnoteq}\ {\isadigit{1}}}.
44.65 @@ -220,7 +209,8 @@
44.66 \isacommand{apply}\ simp{\isacharunderscore}all\isanewline
44.67 \isacommand{done}%
44.68 \begin{isamarkuptext}%
44.69 -Now the case distinction lemma on type \isa{three} is easy to derive if you know how to:%
44.70 +Now the case distinction lemma on type \isa{three} is easy to derive if you
44.71 +know how:%
44.72 \end{isamarkuptext}%
44.73 \isacommand{lemma}\ three{\isacharunderscore}cases{\isacharcolon}\ {\isachardoublequote}{\isasymlbrakk}\ P\ A{\isacharsemicolon}\ P\ B{\isacharsemicolon}\ P\ C\ {\isasymrbrakk}\ {\isasymLongrightarrow}\ P\ x{\isachardoublequote}%
44.74 \begin{isamarkuptxt}%
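[Editorial aside, not part of the changeset: the distinctness argument in this hunk — injectivity of the abstraction/representation functions reduces equality on the new type to equality of representatives — can be sketched in Lean 4. The names `Three`, `A`, `B`, `C` below are illustrative stand-ins for the tutorial's `typedef three`, not part of any library.]

```lean
-- Hypothetical analogue of  typedef three = {0, 1, 2}:
-- a subtype of Nat, where Subtype.ext plays the role of
-- Rep_three_inject / Abs_three_inject.
def Three := {n : Nat // n ≤ 2}

def A : Three := ⟨0, by decide⟩
def B : Three := ⟨1, by decide⟩
def C : Three := ⟨2, by decide⟩

-- The representation function (Subtype.val) is injective ...
theorem rep_inject {x y : Three} (h : x.val = y.val) : x = y :=
  Subtype.ext h

-- ... so distinctness of A and B reduces to the goal 0 ≠ 1 on Nat,
-- which the decision procedure discharges, just as the simplifier
-- solves 0 ≠ 1 in the Isabelle proof above.
example : A ≠ B := by
  intro h
  have h0 : (0 : Nat) = 1 := congrArg Subtype.val h
  exact absurd h0 (by decide)
```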
45.1 --- a/doc-src/TutorialI/Types/numerics.tex Thu Aug 09 10:17:45 2001 +0200
45.2 +++ b/doc-src/TutorialI/Types/numerics.tex Thu Aug 09 18:12:15 2001 +0200
45.3 @@ -3,6 +3,7 @@
45.4 \section{Numbers}
45.5 \label{sec:numbers}
45.6
45.7 +\index{numbers|(}%
45.8 Until now, our numerical examples have used the type of \textbf{natural
45.9 numbers},
45.10 \isa{nat}. This is a recursive datatype generated by the constructors
45.11 @@ -45,7 +46,7 @@
45.12 They begin
45.13 with a number sign (\isa{\#}), have an optional minus sign (\isa{-}) and
45.14 then one or more decimal digits. Examples are \isa{\#0}, \isa{\#-3}
45.15 -and \isa{\#441223334678}.
45.16 +and \isa{\#441223334678}.\REMARK{will need updating?}
45.17
45.18 Literals look like constants, but they abbreviate
45.19 terms, representing the number in a two's complement binary notation.
45.20 @@ -110,7 +111,7 @@
45.21 in a variety of additive types. The symbols \sdx{1} and \sdx{2} are
45.22 not constants but abbreviations for \isa{Suc 0} and \isa{Suc(Suc 0)},
45.23 respectively. The literals \isa{\#0}, \isa{\#1} and \isa{\#2} are
45.24 -syntactically different from \isa{0}, \isa{1} and \isa{2}. You will
45.25 +syntactically different from \isa{0}, \isa{1} and \isa{2}. You will\REMARK{will need updating?}
45.26 sometimes prefer one notation to the other. Literals are obviously
45.27 necessary to express large values, while \isa{0} and \isa{Suc} are needed
45.28 in order to match many theorems, including the rewrite rules for primitive
45.29 @@ -224,7 +225,7 @@
45.30 d))
45.31 \rulename{nat_diff_split}
45.32 \end{isabelle}
45.33 -For example, it proves the following fact, which lies outside the scope of
45.34 +For example, this splitting proves the following fact, which lies outside the scope of
45.35 linear arithmetic:\REMARK{replace by new example!}
45.36 \begin{isabelle}
45.37 \isacommand{lemma}\ "(n-1)*(n+1)\ =\ n*n\ -\ 1"\isanewline
45.38 @@ -245,8 +246,10 @@
45.39 +\ z)
45.40 \rulename{add_left_commute}
45.41 \end{isabelle}
45.42 -The name \isa{add_ac} refers to the list of all three theorems, similarly
45.43 -there is \isa{mult_ac}. Here is an example of the sorting effect. Start
45.44 +The name \isa{add_ac}\index{*add_ac (theorems)}
45.45 +refers to the list of all three theorems; similarly
45.46 +there is \isa{mult_ac}.\index{*mult_ac (theorems)}
45.47 +Here is an example of the sorting effect. Start
45.48 with this goal:
45.49 \begin{isabelle}
45.50 \ 1.\ Suc\ (i\ +\ j\ *\ l\ *\ k\ +\ m\ *\ n)\ =\
45.51 @@ -297,9 +300,11 @@
45.52
45.53 Concerning simplifier tricks, we have no need to eliminate subtraction: it
45.54 is well-behaved. As with the natural numbers, the simplifier can sort the
45.55 -operands of sums and products. The name \isa{zadd_ac} refers to the
45.56 +operands of sums and products. The name \isa{zadd_ac}\index{*zadd_ac (theorems)}
45.57 +refers to the
45.58 associativity and commutativity theorems for integer addition, while
45.59 -\isa{zmult_ac} has the analogous theorems for multiplication. The
45.60 +\isa{zmult_ac}\index{*zmult_ac (theorems)}
45.61 +has the analogous theorems for multiplication. The
45.62 prefix~\isa{z} in many theorem names recalls the use of $\mathbb{Z}$ to
45.63 denote the set of integers.
45.64
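[Editorial aside, not part of the changeset: the sorting effect of the `add_ac` list — the three theorems `add_assoc`, `add_commute`, `add_left_commute` used together as permutative rewrite rules — has a direct counterpart in Lean 4's simplifier, sketched here for natural-number addition; `zadd_ac` and the multiplication lists work the same way.]

```lean
-- Hypothetical sketch: ordered rewriting with associativity,
-- commutativity, and left-commutativity, playing the role of
-- Isabelle's add_ac list. simp permutes the operands of each
-- sum into a canonical order, so both sides normalize to the
-- same term and the goal closes.
example (a b c : Nat) : a + b + c = c + (b + a) := by
  simp [Nat.add_comm, Nat.add_assoc, Nat.add_left_comm]
```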
45.65 @@ -450,4 +455,5 @@
45.66 about limits, differentiation and integration~\cite{fleuriot-jcm}. The
45.67 development defines an infinitely large number, \isa{omega} and an
45.68 infinitely small positive number, \isa{epsilon}. The
45.69 -relation $x\approx y$ means ``$x$ is infinitely close to~$y$''.
45.70 +relation $x\approx y$ means ``$x$ is infinitely close to~$y$''.%
45.71 +\index{numbers|)}
45.72 \ No newline at end of file
46.1 --- a/doc-src/TutorialI/Types/types.tex Thu Aug 09 10:17:45 2001 +0200
46.2 +++ b/doc-src/TutorialI/Types/types.tex Thu Aug 09 18:12:15 2001 +0200
46.3 @@ -18,7 +18,8 @@
46.4
46.5 The material in this section goes beyond the needs of most novices. Serious
46.6 users should at least skim the sections on basic types and on type classes.
46.7 -The latter is fairly advanced: read the beginning to understand what it is
46.8 +The latter material is fairly advanced; read the beginning to understand what
46.9 +it is
46.10 about, but consult the rest only when necessary.
46.11
46.12 \input{Types/numerics}
46.13 @@ -44,8 +45,9 @@
46.14 an axiomatic specification of a class of types. Thus we can talk about a type
46.15 $\tau$ being in a class $C$, which is written $\tau :: C$. This is the case if
46.16 $\tau$ satisfies the axioms of $C$. Furthermore, type classes can be
46.17 -organized in a hierarchy. Thus there is the notion of a class $D$ being a
46.18 -\textbf{subclass} of a class $C$, written $D < C$. This is the case if all
46.19 +organized in a hierarchy. Thus there is the notion of a class $D$ being a
46.20 +\textbf{subclass}\index{subclasses}
46.21 +of a class $C$, written $D < C$. This is the case if all
46.22 axioms of $C$ are also provable in $D$. We introduce these concepts
46.23 by means of a running example, ordering relations.
46.24
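[Editorial aside, not part of the changeset: the subclass notion described in this hunk — $D < C$ holds when all axioms of $C$ are provable in $D$ — can be illustrated in Lean 4 with its `extends` mechanism, using the running example of ordering relations. The class and field names below are hypothetical, not from any library.]

```lean
-- Hypothetical sketch: Ordered extends Preordered, so every
-- Ordered instance also carries proofs of the Preordered
-- axioms -- the analogue of a subclass D < C.
class Preordered (α : Type) where
  le : α → α → Prop
  le_refl : ∀ x, le x x

class Ordered (α : Type) extends Preordered α where
  le_antisymm : ∀ x y, le x y → le y x → x = y

-- Nat satisfies the stronger axioms, hence also the weaker ones,
-- so it inhabits both classes at once.
instance : Ordered Nat where
  le := Nat.le
  le_refl := Nat.le_refl
  le_antisymm := fun _ _ => Nat.le_antisymm
```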
47.1 --- a/doc-src/TutorialI/tutorial.ind Thu Aug 09 10:17:45 2001 +0200
47.2 +++ b/doc-src/TutorialI/tutorial.ind Thu Aug 09 18:12:15 2001 +0200
47.3 @@ -49,6 +49,7 @@
47.4 \item \texttt {abs}, \bold{189}
47.5 \item absolute value, 135
47.6 \item \isa {add} (modifier), 27
47.7 + \item \isa {add_ac} (theorems), 134
47.8 \item \isa {add_assoc} (theorem), \bold{134}
47.9 \item \isa {add_commute} (theorem), \bold{134}
47.10 \item \isa {add_mult_distrib} (theorem), \bold{133}
47.11 @@ -133,13 +134,12 @@
47.12 \item converse
47.13 \subitem of a relation, \bold{96}
47.14 \item \isa {converse_iff} (theorem), \bold{96}
47.15 - \item CTL, 100--110
47.16
47.17 \indexspace
47.18
47.19 \item \isacommand {datatype} (command), 7, 36--41
47.20 \item datatypes, 15--20
47.21 - \subitem and nested recursion, 38
47.22 + \subitem and nested recursion, 38, 42
47.23 \subitem mutually recursive, 36
47.24 \item \isacommand {defer} (command), 14, 84
47.25 \item Definitional Approach, 24
47.26 @@ -204,7 +204,7 @@
47.27 \indexspace
47.28
47.29 \item \isa {False} (constant), 3
47.30 - \item \isa {fast} (method), 76
47.31 + \item \isa {fast} (method), 76, 108
47.32 \item Fibonacci function, 44
47.33 \item \isa {finite} (symbol), 93
47.34 \item \isa {Finites} (constant), 93
47.35 @@ -229,6 +229,8 @@
47.36 \item generalizing induction formulae, 32
47.37 \item Girard, Jean-Yves, \fnote{55}
47.38 \item Gordon, Mike, 1
47.39 + \item grammars
47.40 + \subitem defining inductively, 124--129
47.41 \item ground terms example, 119--124
47.42
47.43 \indexspace
47.44 @@ -237,6 +239,7 @@
47.45 \item higher-order pattern, \bold{159}
47.46 \item Hilbert's $\varepsilon$-operator, 70
47.47 \item HOLCF, 41
47.48 + \item Hopcroft, J. E., 129
47.49 \item \isa {hypreal} (type), 137
47.50
47.51 \indexspace
47.52 @@ -249,7 +252,7 @@
47.53 \item identity relation, \bold{96}
47.54 \item \isa {if} expressions, 3, 4
47.55 \subitem simplification of, 31
47.56 - \subitem splitting of, 29
47.57 + \subitem splitting of, 29, 47
47.58 \item if-and-only-if, 3
47.59 \item \isa {iff} (attribute), 74, 86, 114
47.60 \item \isa {iffD1} (theorem), \bold{78}
47.61 @@ -261,6 +264,7 @@
47.62 \item \isa {Image_iff} (theorem), \bold{96}
47.63 \item \isa {impI} (theorem), \bold{56}
47.64 \item implication, 56--57
47.65 + \item \isa {ind_cases} (method), 115
47.66 \item \isa {induct_tac} (method), 10, 17, 50, 172
47.67 \item induction, 168--175
47.68 \subitem recursion, 49--50
47.69 @@ -339,7 +343,7 @@
47.70 \item \isa {Main} (theory), 2
47.71 \item major premise, \bold{59}
47.72 \item \isa {max} (constant), 21, 22
47.73 - \item measure function, \bold{45}, \bold{98}
47.74 + \item measure functions, 45, 98
47.75 \item \isa {measure_def} (theorem), \bold{99}
47.76 \item meta-logic, \bold{64}
47.77 \item methods, \bold{14}
47.78 @@ -347,12 +351,14 @@
47.79 \item \isa {mod} (symbol), 21
47.80 \item \isa {mod_div_equality} (theorem), \bold{133}
47.81 \item \isa {mod_mult_distrib} (theorem), \bold{133}
47.82 + \item model checking example, 100--110
47.83 \item \emph{modus ponens}, 51, 56
47.84 \item \isa {mono_def} (theorem), \bold{100}
47.85 \item monotone functions, \bold{100}, 123
47.86 \subitem and inductive definitions, 121--122
47.87 \item \isa {more} (constant), 140--142
47.88 \item \isa {mp} (theorem), \bold{56}
47.89 + \item \isa {mult_ac} (theorems), 134
47.90 \item multiset ordering, \bold{99}
47.91
47.92 \indexspace
47.93 @@ -370,6 +376,7 @@
47.94 \item \isa {None} (constant), \bold{22}
47.95 \item \isa {notE} (theorem), \bold{57}
47.96 \item \isa {notI} (theorem), \bold{57}
47.97 + \item numbers, 131--137
47.98 \item numeric literals, 132
47.99 \subitem for type \protect\isa{nat}, 133
47.100 \subitem for type \protect\isa{real}, 136
47.101 @@ -393,8 +400,10 @@
47.102 \item pairs and tuples, 22, 137--140
47.103 \item parent theories, \bold{2}
47.104 \item partial function, 164
47.105 + \item pattern matching
47.106 + \subitem and \isacommand{recdef}, 45
47.107 \item pattern, higher-order, \bold{159}
47.108 - \item PDL, 102--105
47.109 + \item PDL, 102--104
47.110 \item permutative rewrite rule, \bold{158}
47.111 \item \isacommand {pr} (command), 14, 84
47.112 \item \isacommand {prefer} (command), 14, 84
47.113 @@ -445,6 +454,8 @@
47.114 \item recursion induction, 49--50
47.115 \item \isacommand {redo} (command), 14
47.116 \item reflexive and transitive closure, 96--98
47.117 + \item reflexive transitive closure
47.118 + \subitem defining inductively, 116--119
47.119 \item relations, 95--98
47.120 \subitem well-founded, 98--99
47.121 \item \isa {rename_tac} (method), 66--67
47.122 @@ -500,15 +511,15 @@
47.123 \item sorts, 150
47.124 \item \isa {spec} (theorem), \bold{64}
47.125 \item \isa {split} (attribute), 30
47.126 - \item \isa {split} (constant), \bold{137}
47.127 - \item \isa {split} (method), 29
47.128 - \item \isa {split} (method, attr.), 29--31
47.129 - \item \isa {split} (modified), 30
47.130 + \item \isa {split} (constant), 137
47.131 + \item \isa {split} (method), 29, 138
47.132 + \item \isa {split} (modifier), 30
47.133 \item split rule, \bold{30}
47.134 \item \isa {split_if} (theorem), 30
47.135 \item \isa {split_if_asm} (theorem), 30
47.136 \item \isa {ssubst} (theorem), \bold{61}
47.137 \item structural induction, \see{induction, structural}{1}
47.138 + \item subgoal numbering, 44
47.139 \item \isa {subgoal_tac} (method), 82
47.140 \item subgoals, 10
47.141 \item subset relation, \bold{90}
47.142 @@ -545,6 +556,7 @@
47.143 \item \isa {trace_simp} (flag), 31
47.144 \item tracing the simplifier, \bold{31}
47.145 \item \isa {trancl_trans} (theorem), \bold{97}
47.146 + \item transition systems, 101
47.147 \item \isacommand {translations} (command), 24
47.148 \item tries, 41--44
47.149 \item \isa {True} (constant), 3
47.150 @@ -555,7 +567,7 @@
47.151 \item type inference, \bold{3}
47.152 \item type synonyms, 23
47.153 \item type variables, 3
47.154 - \item \isacommand {typedecl} (command), 150
47.155 + \item \isacommand {typedecl} (command), 101, 150
47.156 \item \isacommand {typedef} (command), 151--155
47.157 \item types, 2--3
47.158 \subitem declaring, 150--151
47.159 @@ -564,6 +576,7 @@
47.160
47.161 \indexspace
47.162
47.163 + \item Ullman, J. D., 129
47.164 \item \texttt {UN}, \bold{189}
47.165 \item \texttt {Un}, \bold{189}
47.166 \item \isa {UN_E} (theorem), \bold{92}
47.167 @@ -593,7 +606,16 @@
47.168
47.169 \item Wenzel, Markus, v
47.170 \item \isa {wf_induct} (theorem), \bold{99}
47.171 + \item \isa {wf_inv_image} (theorem), \bold{99}
47.172 + \item \isa {wf_less_than} (theorem), \bold{98}
47.173 + \item \isa {wf_lex_prod} (theorem), \bold{99}
47.174 + \item \isa {wf_measure} (theorem), \bold{99}
47.175 \item \isa {while} (constant), 167
47.176 \item \isa {While_Combinator} (theory), 167
47.177
47.178 + \indexspace
47.179 +
47.180 + \item \isa {zadd_ac} (theorems), 135
47.181 + \item \isa {zmult_ac} (theorems), 135
47.182 +
47.183 \end{theindex}