*** empty log message ***
author nipkow
Tue, 01 May 2001 22:26:55 +0200
changeset 11277 a2bff98d6e5d
parent 11276 f8353c722d4e
child 11278 9710486b886b
doc-src/TutorialI/Advanced/Partial.thy
doc-src/TutorialI/Advanced/document/Partial.tex
doc-src/TutorialI/CTL/CTLind.thy
doc-src/TutorialI/CTL/document/CTLind.tex
doc-src/TutorialI/Misc/AdvancedInd.thy
doc-src/TutorialI/Misc/document/AdvancedInd.tex
doc-src/TutorialI/Recdef/Nested1.thy
doc-src/TutorialI/Recdef/Nested2.thy
doc-src/TutorialI/Recdef/document/Nested1.tex
doc-src/TutorialI/Recdef/document/Nested2.tex
doc-src/TutorialI/Types/Overloading1.thy
doc-src/TutorialI/Types/Overloading2.thy
doc-src/TutorialI/Types/Pairs.thy
doc-src/TutorialI/Types/Typedef.thy
doc-src/TutorialI/Types/document/Overloading1.tex
doc-src/TutorialI/Types/document/Overloading2.tex
doc-src/TutorialI/Types/document/Pairs.tex
doc-src/TutorialI/Types/document/Typedef.tex
doc-src/TutorialI/Types/types.tex
     1.1 --- a/doc-src/TutorialI/Advanced/Partial.thy	Tue May 01 17:16:32 2001 +0200
     1.2 +++ b/doc-src/TutorialI/Advanced/Partial.thy	Tue May 01 22:26:55 2001 +0200
     1.3 @@ -1,14 +1,24 @@
     1.4  (*<*)theory Partial = While_Combinator:(*>*)
     1.5  
     1.6 -text{*\noindent
     1.7 -Throughout the tutorial we have emphasized the fact that all functions
     1.8 -in HOL are total. Hence we cannot hope to define truly partial
     1.9 -functions. The best we can do are functions that are
    1.10 -\emph{underdefined}\index{underdefined function}:
    1.11 -for certain arguments we only know that a result
    1.12 -exists, but we do not know what it is. When defining functions that are
    1.13 -normally considered partial, underdefinedness turns out to be a very
    1.14 -reasonable alternative.
    1.15 +text{*\noindent Throughout the tutorial we have emphasized the fact
    1.16 +that all functions in HOL are total. Hence we cannot hope to define
    1.17 +truly partial functions but must totalize them. A straightforward
    1.18 +totalization is to lift the result type of the function from $\tau$ to
    1.19 +$\tau$~@{text option} (see \ref{sec:option}), where @{term None} is
    1.20 +returned if the function is applied to an argument not in its
    1.21 +domain. Function @{term assoc} in \S\ref{sec:Trie} is a simple example.
    1.22 +We do not pursue this schema further because it should be clear
    1.23 +how it works. Its main drawback is that the result of such a lifted
    1.24 +function has to be unpacked first before it can be processed
     1.25 +further. Its main advantage is that you can distinguish whether the
     1.26 +function was applied to an argument in its domain. If you do
    1.27 +not need to make this distinction, for example because the function is
    1.28 +never used outside its domain, it is easier to work with
    1.29 +\emph{underdefined}\index{underdefined function} functions: for
    1.30 +certain arguments we only know that a result exists, but we do not
    1.31 +know what it is. When defining functions that are normally considered
    1.32 +partial, underdefinedness turns out to be a very reasonable
    1.33 +alternative.
    1.34  
    1.35  We have already seen an instance of underdefinedness by means of
    1.36  non-exhaustive pattern matching: the definition of @{term last} in
    1.37 @@ -61,7 +71,7 @@
    1.38  standard mathematical division function.
    1.39  
    1.40  As a more substantial example we consider the problem of searching a graph.
    1.41 -For simplicity our graph is given by a function (@{term f}) of
    1.42 +For simplicity our graph is given by a function @{term f} of
    1.43  type @{typ"'a \<Rightarrow> 'a"} which
    1.44  maps each node to its successor, i.e.\ the graph is really a tree.
    1.45  The task is to find the end of a chain, modelled by a node pointing to
    1.46 @@ -90,9 +100,7 @@
    1.47  text{*\noindent
    1.48  The recursion equation itself should be clear enough: it is our aborted
    1.49  first attempt augmented with a check that there are no non-trivial loops.
    1.50 -
    1.51 -What complicates the termination proof is that the argument of @{term find}
    1.52 -is a pair. To express the required well-founded relation we employ the
    1.53 +To express the required well-founded relation we employ the
    1.54  predefined combinator @{term same_fst} of type
    1.55  @{text[display]"('a \<Rightarrow> bool) \<Rightarrow> ('a \<Rightarrow> ('b\<times>'b)set) \<Rightarrow> (('a\<times>'b) \<times> ('a\<times>'b))set"}
    1.56  defined as
    1.57 @@ -177,8 +185,8 @@
    1.58  by @{text auto} but falls to @{text simp}:
    1.59  *}
    1.60  
    1.61 -lemma lem: "\<lbrakk> wf(step1 f); x' = f x \<rbrakk> \<Longrightarrow> 
    1.62 -   \<exists>y. while (\<lambda>(x,x'). x' \<noteq> x) (\<lambda>(x,x'). (x',f x')) (x,x') = (y,y) \<and>
    1.63 +lemma lem: "wf(step1 f) \<Longrightarrow>
    1.64 +  \<exists>y. while (\<lambda>(x,x'). x' \<noteq> x) (\<lambda>(x,x'). (x',f x')) (x,f x) = (y,y) \<and>
    1.65         f y = y"
    1.66  apply(rule_tac P = "\<lambda>(x,x'). x' = f x" and
    1.67                 r = "inv_image (step1 f) fst" in while_rule);
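
A rough illustration of the option-lifting described in the rewritten opening
paragraph above; this is a sketch in the tutorial's syntax, not part of the
changeset, and the constant name and types are only assumed to match the
assoc example cited there:

  consts assoc :: "('key \<times> 'val) list \<Rightarrow> 'key \<Rightarrow> 'val option"
  primrec
    (* None marks arguments outside the domain of the lookup *)
    "assoc []     x = None"
    "assoc (p#ps) x = (if fst p = x then Some(snd p) else assoc ps x)"
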
     2.1 --- a/doc-src/TutorialI/Advanced/document/Partial.tex	Tue May 01 17:16:32 2001 +0200
     2.2 +++ b/doc-src/TutorialI/Advanced/document/Partial.tex	Tue May 01 22:26:55 2001 +0200
     2.3 @@ -3,15 +3,25 @@
     2.4  \def\isabellecontext{Partial}%
     2.5  %
     2.6  \begin{isamarkuptext}%
     2.7 -\noindent
     2.8 -Throughout the tutorial we have emphasized the fact that all functions
     2.9 -in HOL are total. Hence we cannot hope to define truly partial
    2.10 -functions. The best we can do are functions that are
    2.11 -\emph{underdefined}\index{underdefined function}:
    2.12 -for certain arguments we only know that a result
    2.13 -exists, but we do not know what it is. When defining functions that are
    2.14 -normally considered partial, underdefinedness turns out to be a very
    2.15 -reasonable alternative.
    2.16 +\noindent Throughout the tutorial we have emphasized the fact
    2.17 +that all functions in HOL are total. Hence we cannot hope to define
    2.18 +truly partial functions but must totalize them. A straightforward
    2.19 +totalization is to lift the result type of the function from $\tau$ to
    2.20 +$\tau$~\isa{option} (see \ref{sec:option}), where \isa{None} is
    2.21 +returned if the function is applied to an argument not in its
    2.22 +domain. Function \isa{assoc} in \S\ref{sec:Trie} is a simple example.
    2.23 +We do not pursue this schema further because it should be clear
    2.24 +how it works. Its main drawback is that the result of such a lifted
    2.25 +function has to be unpacked first before it can be processed
     2.26 +further. Its main advantage is that you can distinguish whether the
     2.27 +function was applied to an argument in its domain. If you do
    2.28 +not need to make this distinction, for example because the function is
    2.29 +never used outside its domain, it is easier to work with
    2.30 +\emph{underdefined}\index{underdefined function} functions: for
    2.31 +certain arguments we only know that a result exists, but we do not
    2.32 +know what it is. When defining functions that are normally considered
    2.33 +partial, underdefinedness turns out to be a very reasonable
    2.34 +alternative.
    2.35  
    2.36  We have already seen an instance of underdefinedness by means of
    2.37  non-exhaustive pattern matching: the definition of \isa{last} in
    2.38 @@ -64,7 +74,7 @@
    2.39  standard mathematical division function.
    2.40  
    2.41  As a more substantial example we consider the problem of searching a graph.
    2.42 -For simplicity our graph is given by a function (\isa{f}) of
    2.43 +For simplicity our graph is given by a function \isa{f} of
    2.44  type \isa{{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}a} which
    2.45  maps each node to its successor, i.e.\ the graph is really a tree.
    2.46  The task is to find the end of a chain, modelled by a node pointing to
    2.47 @@ -93,9 +103,7 @@
    2.48  \noindent
    2.49  The recursion equation itself should be clear enough: it is our aborted
    2.50  first attempt augmented with a check that there are no non-trivial loops.
    2.51 -
    2.52 -What complicates the termination proof is that the argument of \isa{find}
    2.53 -is a pair. To express the required well-founded relation we employ the
    2.54 +To express the required well-founded relation we employ the
    2.55  predefined combinator \isa{same{\isacharunderscore}fst} of type
    2.56  \begin{isabelle}%
    2.57  \ \ \ \ \ {\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ bool{\isacharparenright}\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}b{\isasymtimes}{\isacharprime}b{\isacharparenright}set{\isacharparenright}\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharparenleft}{\isacharprime}a{\isasymtimes}{\isacharprime}b{\isacharparenright}\ {\isasymtimes}\ {\isacharparenleft}{\isacharprime}a{\isasymtimes}{\isacharprime}b{\isacharparenright}{\isacharparenright}set%
    2.58 @@ -188,8 +196,8 @@
    2.59  Only the final premise of \isa{while{\isacharunderscore}rule} is left unproved
    2.60  by \isa{auto} but falls to \isa{simp}:%
    2.61  \end{isamarkuptext}%
    2.62 -\isacommand{lemma}\ lem{\isacharcolon}\ {\isachardoublequote}{\isasymlbrakk}\ wf{\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}{\isacharsemicolon}\ x{\isacharprime}\ {\isacharequal}\ f\ x\ {\isasymrbrakk}\ {\isasymLongrightarrow}\ \isanewline
    2.63 -\ \ \ {\isasymexists}y{\isachardot}\ while\ {\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ x{\isacharprime}\ {\isasymnoteq}\ x{\isacharparenright}\ {\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ {\isacharparenleft}x{\isacharprime}{\isacharcomma}f\ x{\isacharprime}{\isacharparenright}{\isacharparenright}\ {\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}y{\isacharcomma}y{\isacharparenright}\ {\isasymand}\isanewline
    2.64 +\isacommand{lemma}\ lem{\isacharcolon}\ {\isachardoublequote}wf{\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}\ {\isasymLongrightarrow}\isanewline
    2.65 +\ \ {\isasymexists}y{\isachardot}\ while\ {\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ x{\isacharprime}\ {\isasymnoteq}\ x{\isacharparenright}\ {\isacharparenleft}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ {\isacharparenleft}x{\isacharprime}{\isacharcomma}f\ x{\isacharprime}{\isacharparenright}{\isacharparenright}\ {\isacharparenleft}x{\isacharcomma}f\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}y{\isacharcomma}y{\isacharparenright}\ {\isasymand}\isanewline
    2.66  \ \ \ \ \ \ \ f\ y\ {\isacharequal}\ y{\isachardoublequote}\isanewline
    2.67  \isacommand{apply}{\isacharparenleft}rule{\isacharunderscore}tac\ P\ {\isacharequal}\ {\isachardoublequote}{\isasymlambda}{\isacharparenleft}x{\isacharcomma}x{\isacharprime}{\isacharparenright}{\isachardot}\ x{\isacharprime}\ {\isacharequal}\ f\ x{\isachardoublequote}\ \isakeyword{and}\isanewline
    2.68  \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ r\ {\isacharequal}\ {\isachardoublequote}inv{\isacharunderscore}image\ {\isacharparenleft}step{\isadigit{1}}\ f{\isacharparenright}\ fst{\isachardoublequote}\ \isakeyword{in}\ while{\isacharunderscore}rule{\isacharparenright}\isanewline
     3.1 --- a/doc-src/TutorialI/CTL/CTLind.thy	Tue May 01 17:16:32 2001 +0200
     3.2 +++ b/doc-src/TutorialI/CTL/CTLind.thy	Tue May 01 22:26:55 2001 +0200
     3.3 @@ -59,9 +59,9 @@
     3.4  and that the instantiated induction hypothesis implies the conclusion.
     3.5  
     3.6  Now we come to the key lemma. Assuming that no infinite @{term A}-avoiding
     3.7 -path starts from @{term s}, we want to show @{prop"s \<in> lfp(af A)"}. This
     3.8 -can be generalized by proving that every point @{term t} ``between''
     3.9 -@{term s} and @{term A}, i.e.\ all of @{term"Avoid s A"},
    3.10 +path starts from @{term s}, we want to show @{prop"s \<in> lfp(af A)"}. For the
    3.11 +inductive proof this must be generalized to the statement that every point @{term t}
    3.12 +``between'' @{term s} and @{term A}, i.e.\ all of @{term"Avoid s A"},
    3.13  is contained in @{term"lfp(af A)"}:
    3.14  *}
    3.15  
    3.16 @@ -79,7 +79,7 @@
    3.17  The formal counterpart of this proof sketch is a well-founded induction
    3.18  w.r.t.\ @{term M} restricted to (roughly speaking) @{term"Avoid s A - A"}:
    3.19  @{term[display]"{(y,x). (x,y) \<in> M \<and> x \<in> Avoid s A \<and> x \<notin> A}"}
    3.20 -As we shall see in a moment, the absence of infinite @{term A}-avoiding paths
    3.21 +As we shall see presently, the absence of infinite @{term A}-avoiding paths
    3.22  starting from @{term s} implies well-foundedness of this relation. For the
    3.23  moment we assume this and proceed with the induction:
    3.24  *}
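
The generalized lemma this hunk refers to, Avoid_in_lfp, has roughly the
following shape in CTLind.thy (a sketch for orientation, not quoted from the
changeset; only the lemma name and its rule_format(no_asm) attribute are
visible in the diff itself):

  lemma Avoid_in_lfp[rule_format(no_asm)]:
    "\<forall>p\<in>Paths s. \<exists>i. p i \<in> A \<Longrightarrow> t \<in> Avoid s A \<longrightarrow> t \<in> lfp(af A)"
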
     4.1 --- a/doc-src/TutorialI/CTL/document/CTLind.tex	Tue May 01 17:16:32 2001 +0200
     4.2 +++ b/doc-src/TutorialI/CTL/document/CTLind.tex	Tue May 01 22:26:55 2001 +0200
     4.3 @@ -59,9 +59,9 @@
     4.4  and that the instantiated induction hypothesis implies the conclusion.
     4.5  
     4.6  Now we come to the key lemma. Assuming that no infinite \isa{A}-avoiding
     4.7 -path starts from \isa{s}, we want to show \isa{s\ {\isasymin}\ lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. This
     4.8 -can be generalized by proving that every point \isa{t} ``between''
     4.9 -\isa{s} and \isa{A}, i.e.\ all of \isa{Avoid\ s\ A},
    4.10 +path starts from \isa{s}, we want to show \isa{s\ {\isasymin}\ lfp\ {\isacharparenleft}af\ A{\isacharparenright}}. For the
    4.11 +inductive proof this must be generalized to the statement that every point \isa{t}
    4.12 +``between'' \isa{s} and \isa{A}, i.e.\ all of \isa{Avoid\ s\ A},
    4.13  is contained in \isa{lfp\ {\isacharparenleft}af\ A{\isacharparenright}}:%
    4.14  \end{isamarkuptext}%
    4.15  \isacommand{lemma}\ Avoid{\isacharunderscore}in{\isacharunderscore}lfp{\isacharbrackleft}rule{\isacharunderscore}format{\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}{\isacharbrackright}{\isacharcolon}\isanewline
    4.16 @@ -79,7 +79,7 @@
    4.17  \begin{isabelle}%
    4.18  \ \ \ \ \ {\isacharbraceleft}{\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}{\isachardot}\ {\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}\ {\isasymin}\ M\ {\isasymand}\ x\ {\isasymin}\ Avoid\ s\ A\ {\isasymand}\ x\ {\isasymnotin}\ A{\isacharbraceright}%
    4.19  \end{isabelle}
    4.20 -As we shall see in a moment, the absence of infinite \isa{A}-avoiding paths
    4.21 +As we shall see presently, the absence of infinite \isa{A}-avoiding paths
    4.22  starting from \isa{s} implies well-foundedness of this relation. For the
    4.23  moment we assume this and proceed with the induction:%
    4.24  \end{isamarkuptxt}%
     5.1 --- a/doc-src/TutorialI/Misc/AdvancedInd.thy	Tue May 01 17:16:32 2001 +0200
     5.2 +++ b/doc-src/TutorialI/Misc/AdvancedInd.thy	Tue May 01 22:26:55 2001 +0200
     5.3 @@ -86,7 +86,9 @@
     5.4  you want to induct on a whole term, rather than an individual variable. In
     5.5  general, when inducting on some term $t$ you must rephrase the conclusion $C$
     5.6  as
     5.7 -\[ \forall y@1 \dots y@n.~ x = t \longrightarrow C \]
     5.8 +\begin{equation}\label{eqn:ind-over-term}
     5.9 +\forall y@1 \dots y@n.~ x = t \longrightarrow C
    5.10 +\end{equation}
    5.11  where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is new, and
    5.12  perform induction on $x$ afterwards. An example appears in
    5.13  \S\ref{sec:complete-ind} below.
    5.14 @@ -101,7 +103,20 @@
    5.15  
    5.16  Of course, all premises that share free variables with $t$ need to be pulled into
    5.17  the conclusion as well, under the @{text"\<forall>"}, again using @{text"\<longrightarrow>"} as shown above.
    5.18 -*};
    5.19 +
    5.20 +Readers who are puzzled by the form of statement
    5.21 +(\ref{eqn:ind-over-term}) above should remember that the
    5.22 +transformation is only performed to permit induction. Once induction
    5.23 +has been applied, the statement can be transformed back into something quite
     5.24 +intuitive. For example, applying well-founded induction on $x$ (w.r.t.\
    5.25 +$\prec$) to (\ref{eqn:ind-over-term}) and transforming the result a
    5.26 +little leads to the goal
    5.27 +\[ \bigwedge\vec{y}.~\forall \vec{z}.~ t(\vec{z}) \prec t(\vec{y}) \longrightarrow C(\vec{z})
    5.28 +   \Longrightarrow C(\vec{y}) \]
    5.29 +where the dependence of $t$ and $C$ on their free variables has been made explicit.
    5.30 +Unfortunately, this induction schema cannot be expressed as a single theorem because it depends
    5.31 +on the number of free variables in $t$ --- the notation $\vec{y}$ is merely an informal device.
    5.32 +*}
    5.33  
    5.34  subsection{*Beyond Structural and Recursion Induction*};
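
A small sanity check of transformation (\ref{eqn:ind-over-term}) above,
assuming a hypothetical binary predicate P and the term i+j in place of t
(not part of the changeset): once the fresh variable is instantiated with
the term, the rephrased statement gives back the original conclusion.

  lemma "(\<forall>k i j. k = i+j \<longrightarrow> P i j) \<Longrightarrow> P i j"
  by blast  (* instantiate k with i+j, then conclude by reflexivity *)
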
    5.35  
     6.1 --- a/doc-src/TutorialI/Misc/document/AdvancedInd.tex	Tue May 01 17:16:32 2001 +0200
     6.2 +++ b/doc-src/TutorialI/Misc/document/AdvancedInd.tex	Tue May 01 22:26:55 2001 +0200
     6.3 @@ -79,7 +79,9 @@
     6.4  you want to induct on a whole term, rather than an individual variable. In
     6.5  general, when inducting on some term $t$ you must rephrase the conclusion $C$
     6.6  as
     6.7 -\[ \forall y@1 \dots y@n.~ x = t \longrightarrow C \]
     6.8 +\begin{equation}\label{eqn:ind-over-term}
     6.9 +\forall y@1 \dots y@n.~ x = t \longrightarrow C
    6.10 +\end{equation}
    6.11  where $y@1 \dots y@n$ are the free variables in $t$ and $x$ is new, and
    6.12  perform induction on $x$ afterwards. An example appears in
    6.13  \S\ref{sec:complete-ind} below.
    6.14 @@ -93,7 +95,20 @@
    6.15  For an example see \S\ref{sec:CTL-revisited} below.
    6.16  
    6.17  Of course, all premises that share free variables with $t$ need to be pulled into
    6.18 -the conclusion as well, under the \isa{{\isasymforall}}, again using \isa{{\isasymlongrightarrow}} as shown above.%
    6.19 +the conclusion as well, under the \isa{{\isasymforall}}, again using \isa{{\isasymlongrightarrow}} as shown above.
    6.20 +
    6.21 +Readers who are puzzled by the form of statement
    6.22 +(\ref{eqn:ind-over-term}) above should remember that the
    6.23 +transformation is only performed to permit induction. Once induction
    6.24 +has been applied, the statement can be transformed back into something quite
     6.25 +intuitive. For example, applying well-founded induction on $x$ (w.r.t.\
    6.26 +$\prec$) to (\ref{eqn:ind-over-term}) and transforming the result a
    6.27 +little leads to the goal
    6.28 +\[ \bigwedge\vec{y}.~\forall \vec{z}.~ t(\vec{z}) \prec t(\vec{y}) \longrightarrow C(\vec{z})
    6.29 +   \Longrightarrow C(\vec{y}) \]
    6.30 +where the dependence of $t$ and $C$ on their free variables has been made explicit.
    6.31 +Unfortunately, this induction schema cannot be expressed as a single theorem because it depends
    6.32 +on the number of free variables in $t$ --- the notation $\vec{y}$ is merely an informal device.%
    6.33  \end{isamarkuptext}%
    6.34  %
    6.35  \isamarkupsubsection{Beyond Structural and Recursion Induction%
     7.1 --- a/doc-src/TutorialI/Recdef/Nested1.thy	Tue May 01 17:16:32 2001 +0200
     7.2 +++ b/doc-src/TutorialI/Recdef/Nested1.thy	Tue May 01 22:26:55 2001 +0200
     7.3 @@ -3,7 +3,7 @@
     7.4  (*>*)
     7.5  
     7.6  text{*\noindent
     7.7 -Although the definition of @{term trev} is quite natural, we will have
     7.8 +Although the definition of @{term trev} below is quite natural, we will have
     7.9  to overcome a minor difficulty in convincing Isabelle of its termination.
    7.10  It is precisely this difficulty that is the \textit{raison d'\^etre} of
    7.11  this subsection.
     8.1 --- a/doc-src/TutorialI/Recdef/Nested2.thy	Tue May 01 17:16:32 2001 +0200
     8.2 +++ b/doc-src/TutorialI/Recdef/Nested2.thy	Tue May 01 22:26:55 2001 +0200
     8.3 @@ -2,7 +2,7 @@
     8.4  theory Nested2 = Nested0:
     8.5  (*>*)
     8.6  
     8.7 -text{*The termintion condition is easily proved by induction:*}
     8.8 +text{*The termination condition is easily proved by induction:*}
     8.9  
    8.10  lemma [simp]: "t \<in> set ts \<longrightarrow> size t < Suc(term_list_size ts)"
    8.11  by(induct_tac ts, auto)
    8.12 @@ -61,12 +61,12 @@
    8.13  \isacommand{recdef} has been supplied with the congruence theorem
    8.14  @{thm[source]map_cong}:
    8.15  @{thm[display,margin=50]"map_cong"[no_vars]}
    8.16 -Its second premise expresses (indirectly) that the second argument of
    8.17 -@{term"map"} is only applied to elements of its third argument. Congruence
    8.18 +Its second premise expresses (indirectly) that the first argument of
    8.19 +@{term"map"} is only applied to elements of its second argument. Congruence
    8.20  rules for other higher-order functions on lists look very similar. If you get
    8.21  into a situation where you need to supply \isacommand{recdef} with new
    8.22 -congruence rules, you can either append a hint locally
    8.23 -to the specific occurrence of \isacommand{recdef}
    8.24 +congruence rules, you can either append a hint after the end of
    8.25 +the recursion equations
    8.26  *}
    8.27  (*<*)
    8.28  consts dummy :: "nat => nat"
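
The local hint mentioned in this hunk looks roughly as follows when attached
to the trev definition from the Nested theories (a fragment only; the
datatype and consts declarations are omitted, and the recursion equations are
reproduced from the tutorial sources rather than from this diff):

  recdef trev "measure size"
    "trev (Var x)    = Var x"
    "trev (App f ts) = App f (rev (map trev ts))"
  (hints recdef_cong: map_cong)
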
     9.1 --- a/doc-src/TutorialI/Recdef/document/Nested1.tex	Tue May 01 17:16:32 2001 +0200
     9.2 +++ b/doc-src/TutorialI/Recdef/document/Nested1.tex	Tue May 01 22:26:55 2001 +0200
     9.3 @@ -4,7 +4,7 @@
     9.4  %
     9.5  \begin{isamarkuptext}%
     9.6  \noindent
     9.7 -Although the definition of \isa{trev} is quite natural, we will have
     9.8 +Although the definition of \isa{trev} below is quite natural, we will have
     9.9  to overcome a minor difficulty in convincing Isabelle of its termination.
    9.10  It is precisely this difficulty that is the \textit{raison d'\^etre} of
    9.11  this subsection.
    10.1 --- a/doc-src/TutorialI/Recdef/document/Nested2.tex	Tue May 01 17:16:32 2001 +0200
    10.2 +++ b/doc-src/TutorialI/Recdef/document/Nested2.tex	Tue May 01 22:26:55 2001 +0200
    10.3 @@ -3,7 +3,7 @@
    10.4  \def\isabellecontext{Nested{\isadigit{2}}}%
    10.5  %
    10.6  \begin{isamarkuptext}%
    10.7 -The termintion condition is easily proved by induction:%
    10.8 +The termination condition is easily proved by induction:%
    10.9  \end{isamarkuptext}%
   10.10  \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}t\ {\isasymin}\ set\ ts\ {\isasymlongrightarrow}\ size\ t\ {\isacharless}\ Suc{\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}{\isachardoublequote}\isanewline
   10.11  \isacommand{by}{\isacharparenleft}induct{\isacharunderscore}tac\ ts{\isacharcomma}\ auto{\isacharparenright}%
   10.12 @@ -63,12 +63,12 @@
   10.13  \ \ \ \ \ {\isasymlbrakk}xs\ {\isacharequal}\ ys{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ x\ {\isasymin}\ set\ ys\ {\isasymLongrightarrow}\ f\ x\ {\isacharequal}\ g\ x{\isasymrbrakk}\isanewline
   10.14  \isaindent{\ \ \ \ \ }{\isasymLongrightarrow}\ map\ f\ xs\ {\isacharequal}\ map\ g\ ys%
   10.15  \end{isabelle}
   10.16 -Its second premise expresses (indirectly) that the second argument of
   10.17 -\isa{map} is only applied to elements of its third argument. Congruence
   10.18 +Its second premise expresses (indirectly) that the first argument of
   10.19 +\isa{map} is only applied to elements of its second argument. Congruence
   10.20  rules for other higher-order functions on lists look very similar. If you get
   10.21  into a situation where you need to supply \isacommand{recdef} with new
   10.22 -congruence rules, you can either append a hint locally
   10.23 -to the specific occurrence of \isacommand{recdef}%
   10.24 +congruence rules, you can either append a hint after the end of
   10.25 +the recursion equations%
   10.26  \end{isamarkuptext}%
   10.27  {\isacharparenleft}\isakeyword{hints}\ recdef{\isacharunderscore}cong{\isacharcolon}\ map{\isacharunderscore}cong{\isacharparenright}%
   10.28  \begin{isamarkuptext}%
    11.1 --- a/doc-src/TutorialI/Types/Overloading1.thy	Tue May 01 17:16:32 2001 +0200
    11.2 +++ b/doc-src/TutorialI/Types/Overloading1.thy	Tue May 01 22:26:55 2001 +0200
    11.3 @@ -4,8 +4,8 @@
    11.4  
    11.5  text{*
    11.6  We now start with the theory of ordering relations, which we want to phrase
    11.7 -in terms of the two binary symbols @{text"<<"} and @{text"<<="}: they have
    11.8 -been chosen to avoid clashes with @{text"<"} and @{text"\<le>"} in theory @{text
    11.9 +in terms of the two binary symbols @{text"<<"} and @{text"<<="}
   11.10 +to avoid clashes with @{text"<"} and @{text"\<le>"} in theory @{text
   11.11  Main}. To restrict the application of @{text"<<"} and @{text"<<="} we
   11.12  introduce the class @{text ordrel}:
   11.13  *}
   11.14 @@ -15,7 +15,7 @@
   11.15  text{*\noindent
   11.16  This introduces a new class @{text ordrel} and makes it a subclass of
   11.17  the predefined class @{text term}\footnote{The quotes around @{text term}
   11.18 -simply avoid the clash with the command \isacommand{term}.}; @{text term}
   11.19 +simply avoid the clash with the command \isacommand{term}.} --- @{text term}
   11.20  is the class of all HOL types, like ``Object'' in Java.
   11.21  This is a degenerate form of axiomatic type class without any axioms.
   11.22  Its sole purpose is to restrict the use of overloaded constants to meaningful
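
The restriction mentioned here comes from constraining the overloaded
constants to class ordrel. A sketch of the corresponding declarations (only
the axclass line and the symbols << and <<= are confirmed by this diff; the
constant names less and le are assumptions):

  axclass ordrel < "term"

  consts
    less :: "('a::ordrel) \<Rightarrow> 'a \<Rightarrow> bool"    (infixl "<<"  50)
    le   :: "('a::ordrel) \<Rightarrow> 'a \<Rightarrow> bool"    (infixl "<<=" 50)
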
    12.1 --- a/doc-src/TutorialI/Types/Overloading2.thy	Tue May 01 17:16:32 2001 +0200
    12.2 +++ b/doc-src/TutorialI/Types/Overloading2.thy	Tue May 01 22:26:55 2001 +0200
    12.3 @@ -34,6 +34,6 @@
    12.4  \end{tabular}
    12.5  \end{center}
    12.6  And analogously for @{text"<"} instead of @{text"\<le>"}.
    12.7 -The form on the left is translated into the one on the right upon input but it is not
    12.8 -translated back upon output.
    12.9 +The form on the left is translated into the one on the right upon input.
   12.10 +For technical reasons, it is not translated back upon output.
   12.11  *}(*<*)end(*>*)
    13.1 --- a/doc-src/TutorialI/Types/Pairs.thy	Tue May 01 17:16:32 2001 +0200
    13.2 +++ b/doc-src/TutorialI/Types/Pairs.thy	Tue May 01 22:26:55 2001 +0200
    13.3 @@ -109,9 +109,9 @@
    13.4  txt{*
    13.5  @{subgoals[display,indent=0]}
    13.6  Again, simplification produces a term suitable for @{thm[source]split_split}
    13.7 -as above. If you are worried about the funny form of the premise:
    13.8 -@{term"split (op =)"} is the same as @{text"\<lambda>(x,y). x=y"}.
    13.9 -The same procedure works for
   13.10 +as above. If you are worried about the strange form of the premise:
   13.11 +@{term"split (op =)"} is short for @{text"\<lambda>(x,y). x=y"}.
   13.12 +The same proof procedure works for
   13.13  *}
   13.14  
   13.15  (*<*)by(simp split:split_split)(*>*)
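
The abbreviation noted in this hunk can be checked directly; a one-line
illustration, not part of the changeset:

  lemma "split (op =) (a,b) = (a = b)"
  by simp  (* split_conv rewrites the left-hand side to a = b *)
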
    14.1 --- a/doc-src/TutorialI/Types/Typedef.thy	Tue May 01 17:16:32 2001 +0200
    14.2 +++ b/doc-src/TutorialI/Types/Typedef.thy	Tue May 01 22:26:55 2001 +0200
    14.3 @@ -26,10 +26,9 @@
    14.4  This does not define @{typ my_new_type} at all but merely introduces its
    14.5  name. Thus we know nothing about this type, except that it is
    14.6  non-empty. Such declarations without definitions are
    14.7 -useful only if that type can be viewed as a parameter of a theory and we do
    14.8 -not intend to impose any restrictions on it. A typical example is given in
    14.9 -\S\ref{sec:VMC}, where we define transition relations over an arbitrary type
   14.10 -of states without any internal structure.
   14.11 +useful if that type can be viewed as a parameter of the theory.
   14.12 +A typical example is given in \S\ref{sec:VMC}, where we define a transition
   14.13 +relation over an arbitrary type of states.
   14.14  
   14.15  In principle we can always get rid of such type declarations by making those
   14.16  types parameters of every other type, thus keeping the theory generic. In
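
For orientation, the bare declaration under discussion is just the single
line below (the type name my_new_type is the one used in this hunk):

  typedecl my_new_type  (* introduces only a name; nothing is known except non-emptiness *)
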
    15.1 --- a/doc-src/TutorialI/Types/document/Overloading1.tex	Tue May 01 17:16:32 2001 +0200
    15.2 +++ b/doc-src/TutorialI/Types/document/Overloading1.tex	Tue May 01 22:26:55 2001 +0200
    15.3 @@ -7,8 +7,8 @@
    15.4  %
    15.5  \begin{isamarkuptext}%
    15.6  We now start with the theory of ordering relations, which we want to phrase
    15.7 -in terms of the two binary symbols \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}}: they have
    15.8 -been chosen to avoid clashes with \isa{{\isacharless}} and \isa{{\isasymle}} in theory \isa{Main}. To restrict the application of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} we
    15.9 +in terms of the two binary symbols \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}}
   15.10 +to avoid clashes with \isa{{\isacharless}} and \isa{{\isasymle}} in theory \isa{Main}. To restrict the application of \isa{{\isacharless}{\isacharless}} and \isa{{\isacharless}{\isacharless}{\isacharequal}} we
   15.11  introduce the class \isa{ordrel}:%
   15.12  \end{isamarkuptext}%
   15.13  \isacommand{axclass}\ ordrel\ {\isacharless}\ {\isachardoublequote}term{\isachardoublequote}%
   15.14 @@ -16,7 +16,7 @@
   15.15  \noindent
   15.16  This introduces a new class \isa{ordrel} and makes it a subclass of
   15.17  the predefined class \isa{term}\footnote{The quotes around \isa{term}
   15.18 -simply avoid the clash with the command \isacommand{term}.}; \isa{term}
   15.19 +simply avoid the clash with the command \isacommand{term}.} --- \isa{term}
   15.20  is the class of all HOL types, like ``Object'' in Java.
   15.21  This is a degenerate form of axiomatic type class without any axioms.
   15.22  Its sole purpose is to restrict the use of overloaded constants to meaningful
    16.1 --- a/doc-src/TutorialI/Types/document/Overloading2.tex	Tue May 01 17:16:32 2001 +0200
    16.2 +++ b/doc-src/TutorialI/Types/document/Overloading2.tex	Tue May 01 22:26:55 2001 +0200
    16.3 @@ -36,8 +36,8 @@
    16.4  \end{tabular}
    16.5  \end{center}
    16.6  And analogously for \isa{{\isacharless}} instead of \isa{{\isasymle}}.
    16.7 -The form on the left is translated into the one on the right upon input but it is not
    16.8 -translated back upon output.%
    16.9 +The form on the left is translated into the one on the right upon input.
   16.10 +For technical reasons, it is not translated back upon output.%
   16.11  \end{isamarkuptext}%
   16.12  \end{isabellebody}%
   16.13  %%% Local Variables:
    17.1 --- a/doc-src/TutorialI/Types/document/Pairs.tex	Tue May 01 17:16:32 2001 +0200
    17.2 +++ b/doc-src/TutorialI/Types/document/Pairs.tex	Tue May 01 22:26:55 2001 +0200
    17.3 @@ -107,9 +107,9 @@
    17.4  \ {\isadigit{1}}{\isachardot}\ split\ op\ {\isacharequal}\ p\ {\isasymlongrightarrow}\ fst\ p\ {\isacharequal}\ snd\ p%
    17.5  \end{isabelle}
    17.6  Again, simplification produces a term suitable for \isa{split{\isacharunderscore}split}
    17.7 -as above. If you are worried about the funny form of the premise:
    17.8 -\isa{split\ op\ {\isacharequal}} is the same as \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharequal}y}.
    17.9 -The same procedure works for%
   17.10 +as above. If you are worried about the strange form of the premise:
   17.11 +\isa{split\ op\ {\isacharequal}} is short for \isa{{\isasymlambda}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharequal}y}.
   17.12 +The same proof procedure works for%
   17.13  \end{isamarkuptxt}%
   17.14  \isacommand{lemma}\ {\isachardoublequote}p\ {\isasymin}\ {\isacharbraceleft}{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}{\isachardot}\ x{\isacharequal}y{\isacharbraceright}\ {\isasymLongrightarrow}\ fst\ p\ {\isacharequal}\ snd\ p{\isachardoublequote}%
   17.15  \begin{isamarkuptxt}%
    18.1 --- a/doc-src/TutorialI/Types/document/Typedef.tex	Tue May 01 17:16:32 2001 +0200
    18.2 +++ b/doc-src/TutorialI/Types/document/Typedef.tex	Tue May 01 22:26:55 2001 +0200
    18.3 @@ -32,10 +32,9 @@
    18.4  This does not define \isa{my{\isacharunderscore}new{\isacharunderscore}type} at all but merely introduces its
    18.5  name. Thus we know nothing about this type, except that it is
    18.6  non-empty. Such declarations without definitions are
    18.7 -useful only if that type can be viewed as a parameter of a theory and we do
    18.8 -not intend to impose any restrictions on it. A typical example is given in
    18.9 -\S\ref{sec:VMC}, where we define transition relations over an arbitrary type
   18.10 -of states without any internal structure.
   18.11 +useful if that type can be viewed as a parameter of the theory.
   18.12 +A typical example is given in \S\ref{sec:VMC}, where we define a transition
   18.13 +relation over an arbitrary type of states.
   18.14  
   18.15  In principle we can always get rid of such type declarations by making those
   18.16  types parameters of every other type, thus keeping the theory generic. In
    19.1 --- a/doc-src/TutorialI/Types/types.tex	Tue May 01 17:16:32 2001 +0200
    19.2 +++ b/doc-src/TutorialI/Types/types.tex	Tue May 01 22:26:55 2001 +0200
    19.3 @@ -2,7 +2,7 @@
    19.4  \label{ch:more-types}
    19.5  
    19.6  So far we have learned about a few basic types (for example \isa{bool} and
    19.7 -\isa{nat}), type abbreviations (\isacommand{types}) and recursive datatpes
    19.8 +\isa{nat}), type abbreviations (\isacommand{types}) and recursive datatypes
    19.9  (\isacommand{datatype}). This chapter will introduce more
   19.10  advanced material:
   19.11  \begin{itemize}
   19.12 @@ -33,13 +33,15 @@
   19.13  \section{Records}
   19.14  \label{sec:records}
   19.15  
   19.16 -\section{Axiomatic type classes}
   19.17 +\section{Axiomatic Type Classes}
   19.18  \label{sec:axclass}
   19.19  \index{axiomatic type class|(}
   19.20  \index{*axclass|(}
   19.21  
   19.22  
   19.23  The programming language Haskell has popularized the notion of type classes.
   19.24 +In its simplest form, a type class is a set of types with a common interface:
   19.25 +all types in that class must provide the functions in the interface.
   19.26  Isabelle offers the related concept of an \textbf{axiomatic type class}.
   19.27  Roughly speaking, an axiomatic type class is a type class with axioms, i.e.\ 
   19.28  an axiomatic specification of a class of types. Thus we can talk about a type