1.1 --- a/doc-src/TutorialI/CodeGen/CodeGen.thy Fri Sep 01 18:29:52 2000 +0200
1.2 +++ b/doc-src/TutorialI/CodeGen/CodeGen.thy Fri Sep 01 19:09:44 2000 +0200
1.3 @@ -35,7 +35,7 @@
1.4 The stack machine has three instructions: load a constant value onto the
1.5 stack, load the contents of a certain address onto the stack, and apply a
1.6 binary operation to the two topmost elements of the stack, replacing them by
1.7 -the result. As for \isa{expr}, addresses and values are type parameters:
1.8 +the result. As for @{text"expr"}, addresses and values are type parameters:
1.9 *}
1.10
1.11 datatype ('a,'v) instr = Const 'v
1.12 @@ -44,7 +44,7 @@
1.13
1.14 text{*
1.15 The execution of the stack machine is modelled by a function
1.16 -\isa{exec} that takes a list of instructions, a store (modelled as a
1.17 +@{text"exec"} that takes a list of instructions, a store (modelled as a
1.18 function from addresses to values, just like the environment for
1.19 evaluating expressions), and a stack (modelled as a list) of values,
1.20 and returns the stack at the end of the execution---the store remains
1.21 @@ -60,12 +60,12 @@
1.22 | Apply f \\<Rightarrow> exec is s ((f (hd vs) (hd(tl vs)))#(tl(tl vs))))";
1.23
1.24 text{*\noindent
1.25 -Recall that \isa{hd} and \isa{tl}
1.26 +Recall that @{term"hd"} and @{term"tl"}
1.27 return the first element and the remainder of a list.
1.28 -Because all functions are total, \isa{hd} is defined even for the empty
1.29 +Because all functions are total, @{term"hd"} is defined even for the empty
1.30 list, although we do not know what the result is. Thus our model of the
1.31 machine always terminates properly, although the above definition does not
1.32 -tell us much about the result in situations where \isa{Apply} was executed
1.33 +tell us much about the result in situations where @{term"Apply"} was executed
1.34 with fewer than two elements on the stack.
1.35
1.36 The compiler is a function from expressions to a list of instructions. Its
1.37 @@ -91,7 +91,7 @@
1.38 theorem "\\<forall>vs. exec (comp e) s vs = (value e s) # vs";
1.39
1.40 txt{*\noindent
1.41 -which is proved by induction on \isa{e} followed by simplification, once
1.42 +which is proved by induction on @{term"e"} followed by simplification, once
1.43 we have the following lemma about executing the concatenation of two
1.44 instruction sequences:
1.45 *}
1.46 @@ -100,16 +100,16 @@
1.47 "\\<forall>vs. exec (xs@ys) s vs = exec ys s (exec xs s vs)";
1.48
1.49 txt{*\noindent
1.50 -This requires induction on \isa{xs} and ordinary simplification for the
1.51 +This requires induction on @{term"xs"} and ordinary simplification for the
1.52 base cases. In the induction step, simplification leaves us with a formula
1.53 -that contains two \isa{case}-expressions over instructions. Thus we add
1.54 +that contains two @{text"case"}-expressions over instructions. Thus we add
1.55 automatic case splitting as well, which finishes the proof:
1.56 *}
1.57 by(induct_tac xs, simp, simp split: instr.split);
1.58
1.59 text{*\noindent
1.60 Note that because \isaindex{auto} performs simplification, it can
1.61 -also be modified in the same way \isa{simp} can. Thus the proof can be
1.62 +also be modified in the same way @{text"simp"} can. Thus the proof can be
1.63 rewritten as
1.64 *}
1.65 (*<*)
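The compiler-correctness statement in the hunk above, exec (comp e) s vs = (value e s) # vs, translates readily into executable form. The following is a minimal Python sketch; the expression constructors (a constant, a variable, and a binary operation) are assumed from the tutorial's earlier expr section, and all Python names are illustrative rather than part of the theory sources.

```python
# Expressions as tagged tuples: ('Cex', v) | ('Vex', a) | ('Bex', f, e1, e2)
# Instructions:                 ('Const', v) | ('Load', a) | ('Apply', f)

def exec_(instrs, store, stack):
    """Run a list of instructions; the store maps addresses to values."""
    for ins in instrs:
        if ins[0] == 'Const':
            stack = [ins[1]] + stack
        elif ins[0] == 'Load':
            stack = [store(ins[1])] + stack
        else:  # ('Apply', f): replace the two topmost values by f applied to them
            stack = [ins[1](stack[0], stack[1])] + stack[2:]
    return stack

def value(e, env):
    if e[0] == 'Cex':
        return e[1]
    if e[0] == 'Vex':
        return env(e[1])
    _, f, e1, e2 = e
    return f(value(e1, env), value(e2, env))

def comp(e):
    if e[0] == 'Cex':
        return [('Const', e[1])]
    if e[0] == 'Vex':
        return [('Load', e[1])]
    _, f, e1, e2 = e
    # compile e2 first so e1's value ends up on top, matching Apply's order
    return comp(e2) + comp(e1) + [('Apply', f)]
```

With these definitions, exec_(comp(e), s, vs) == [value(e, s)] + vs mirrors the theorem, and exec_(xs + ys, s, vs) == exec_(ys, s, exec_(xs, s, vs)) mirrors the concatenation lemma.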
2.1 --- a/doc-src/TutorialI/Datatype/ABexpr.thy Fri Sep 01 18:29:52 2000 +0200
2.2 +++ b/doc-src/TutorialI/Datatype/ABexpr.thy Fri Sep 01 19:09:44 2000 +0200
2.3 @@ -26,9 +26,9 @@
2.4 | Neg "'a bexp";
2.5
2.6 text{*\noindent
2.7 -Type \isa{aexp} is similar to \isa{expr} in \S\ref{sec:ExprCompiler},
2.8 -except that we have fixed the values to be of type \isa{nat} and that we
2.9 -have fixed the two binary operations \isa{Sum} and \isa{Diff}. Boolean
2.10 +Type @{text"aexp"} is similar to @{text"expr"} in \S\ref{sec:ExprCompiler},
2.11 +except that we have fixed the values to be of type @{typ"nat"} and that we
2.12 +have fixed the two binary operations @{term"Sum"} and @{term"Diff"}. Boolean
2.13 expressions can be arithmetic comparisons, conjunctions and negations.
2.14 The semantics is fixed via two evaluation functions
2.15 *}
2.16 @@ -37,8 +37,8 @@
2.17 evalb :: "'a bexp \\<Rightarrow> ('a \\<Rightarrow> nat) \\<Rightarrow> bool";
2.18
2.19 text{*\noindent
2.20 -that take an expression and an environment (a mapping from variables \isa{'a} to values
2.21 -\isa{nat}) and return its arithmetic/boolean
2.22 +that take an expression and an environment (a mapping from variables @{typ"'a"} to values
2.23 +@{typ"nat"}) and return its arithmetic/boolean
2.24 value. Since the datatypes are mutually recursive, so are functions that
2.25 operate on them. Hence they need to be defined in a single \isacommand{primrec}
2.26 section:
2.27 @@ -66,8 +66,8 @@
2.28 text{*\noindent
2.29 The first argument is a function mapping variables to expressions, the
2.30 substitution. It is applied to all variables in the second argument. As a
2.31 -result, the type of variables in the expression may change from \isa{'a}
2.32 -to \isa{'b}. Note that there are only arithmetic and no boolean variables.
2.33 +result, the type of variables in the expression may change from @{typ"'a"}
2.34 +to @{typ"'b"}. Note that there are only arithmetic and no boolean variables.
2.35 *}
2.36
2.37 primrec
2.38 @@ -108,17 +108,17 @@
2.39 an inductive proof expects a goal of the form
2.40 \[ P@1(x@1)\ \land \dots \land P@n(x@n) \]
2.41 where each variable $x@i$ is of type $\tau@i$. Induction is started by
2.42 -\begin{ttbox}
2.43 -apply(induct_tac \(x@1\) \texttt{and} \(\dots\) \texttt{and} \(x@n\));
2.44 -\end{ttbox}
2.45 +\begin{isabelle}
2.46 +\isacommand{apply}@{text"(induct_tac"} $x@1$ @{text"and"} \dots\ @{text"and"} $x@n$@{text")"}
2.47 +\end{isabelle}
2.48
2.49 \begin{exercise}
2.50 - Define a function \isa{norma} of type @{typ"'a aexp => 'a aexp"} that
2.51 - replaces \isa{IF}s with complex boolean conditions by nested
2.52 - \isa{IF}s where each condition is a \isa{Less} --- \isa{And} and
2.53 - \isa{Neg} should be eliminated completely. Prove that \isa{norma}
2.54 - preserves the value of an expression and that the result of \isa{norma}
2.55 - is really normal, i.e.\ no more \isa{And}s and \isa{Neg}s occur in
2.56 + Define a function @{text"norma"} of type @{typ"'a aexp => 'a aexp"} that
2.57 + replaces @{term"IF"}s with complex boolean conditions by nested
2.58 + @{term"IF"}s where each condition is a @{term"Less"} --- @{term"And"} and
2.59 + @{term"Neg"} should be eliminated completely. Prove that @{text"norma"}
2.60 + preserves the value of an expression and that the result of @{text"norma"}
2.61 + is really normal, i.e.\ no more @{term"And"}s and @{term"Neg"}s occur in
2.62 it. ({\em Hint:} proceed as in \S\ref{sec:boolex}).
2.63 \end{exercise}
2.64 *}
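The mutually recursive evaluators evala and evalb discussed in this hunk can be sketched in Python, where mutual recursion between two functions needs no special declaration. Constructor names not quoted above (Num, Var, Less) are assumed from the tutorial's datatype and are illustrative here.

```python
# aexp: ('IF', bexp, aexp, aexp) | ('Sum', a1, a2) | ('Diff', a1, a2)
#     | ('Var', name) | ('Num', n)
# bexp: ('Less', a1, a2) | ('And', b1, b2) | ('Neg', b)

def evala(a, env):
    tag = a[0]
    if tag == 'IF':
        _, b, a1, a2 = a
        return evala(a1, env) if evalb(b, env) else evala(a2, env)
    if tag == 'Sum':
        return evala(a[1], env) + evala(a[2], env)
    if tag == 'Diff':
        return evala(a[1], env) - evala(a[2], env)
    if tag == 'Var':
        return env(a[1])
    return a[1]  # ('Num', n)

def evalb(b, env):
    tag = b[0]
    if tag == 'Less':
        return evala(b[1], env) < evala(b[2], env)
    if tag == 'And':
        return evalb(b[1], env) and evalb(b[2], env)
    return not evalb(b[1], env)  # ('Neg', b)
```

Just as in the theory, each evaluator calls the other, so in Isabelle both must live in a single primrec section.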
3.1 --- a/doc-src/TutorialI/Datatype/Fundata.thy Fri Sep 01 18:29:52 2000 +0200
3.2 +++ b/doc-src/TutorialI/Datatype/Fundata.thy Fri Sep 01 19:09:44 2000 +0200
3.3 @@ -3,11 +3,12 @@
3.4 (*>*)
3.5 datatype ('a,'i)bigtree = Tip | Branch 'a "'i \\<Rightarrow> ('a,'i)bigtree"
3.6
3.7 -text{*\noindent Parameter \isa{'a} is the type of values stored in
3.8 -the \isa{Branch}es of the tree, whereas \isa{'i} is the index
3.9 -type over which the tree branches. If \isa{'i} is instantiated to
3.10 -\isa{bool}, the result is a binary tree; if it is instantiated to
3.11 -\isa{nat}, we have an infinitely branching tree because each node
3.12 +text{*\noindent
3.13 +Parameter @{typ"'a"} is the type of values stored in
3.14 +the @{term"Branch"}es of the tree, whereas @{typ"'i"} is the index
3.15 +type over which the tree branches. If @{typ"'i"} is instantiated to
3.16 +@{typ"bool"}, the result is a binary tree; if it is instantiated to
3.17 +@{typ"nat"}, we have an infinitely branching tree because each node
3.18 has as many subtrees as there are natural numbers. How can we possibly
3.19 write down such a tree? Using functional notation! For example, the term
3.20 \begin{quote}
3.21 @@ -15,9 +16,9 @@
3.22 \end{quote}
3.23 of type @{typ"(nat,nat)bigtree"} is the tree whose
3.24 root is labeled with 0 and whose $i$th subtree is labeled with $i$ and
3.25 -has merely \isa{Tip}s as further subtrees.
3.26 +has merely @{term"Tip"}s as further subtrees.
3.27
3.28 -Function \isa{map_bt} applies a function to all labels in a \isa{bigtree}:
3.29 +Function @{term"map_bt"} applies a function to all labels in a @{text"bigtree"}:
3.30 *}
3.31
3.32 consts map_bt :: "('a \\<Rightarrow> 'b) \\<Rightarrow> ('a,'i)bigtree \\<Rightarrow> ('b,'i)bigtree"
3.33 @@ -26,12 +27,12 @@
3.34 "map_bt f (Branch a F) = Branch (f a) (\\<lambda>i. map_bt f (F i))"
3.35
3.36 text{*\noindent This is a valid \isacommand{primrec} definition because the
3.37 -recursive calls of \isa{map_bt} involve only subtrees obtained from
3.38 -\isa{F}, i.e.\ the left-hand side. Thus termination is assured. The
3.39 +recursive calls of @{term"map_bt"} involve only subtrees obtained from
3.40 +@{term"F"}, i.e.\ the left-hand side. Thus termination is assured. The
3.41 seasoned functional programmer might have written @{term"map_bt f o F"}
3.42 instead of @{term"%i. map_bt f (F i)"}, but the former is not accepted by
3.43 Isabelle because the termination proof is not as obvious since
3.44 -\isa{map_bt} is only partially applied.
3.45 +@{term"map_bt"} is only partially applied.
3.46
3.47 The following lemma has a canonical proof *}
3.48
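The function-branching bigtree and map_bt above can be mimicked in Python, where a closure plays the role of the index-to-subtree function 'i ⇒ ('a,'i)bigtree; because the closure is only applied on demand, even an infinitely branching tree is representable. This is an illustrative sketch, not part of the theory sources.

```python
Tip = ('Tip',)

def Branch(a, F):
    """F maps an index to a subtree; it is never enumerated eagerly."""
    return ('Branch', a, F)

def map_bt(f, t):
    """Apply f to every label; subtrees are transformed lazily via a closure."""
    if t[0] == 'Tip':
        return Tip
    _, a, F = t
    return Branch(f(a), lambda i: map_bt(f, F(i)))
```

The example tree from the text, Branch 0 (λi. Branch i (λn. Tip)), becomes Branch(0, lambda i: Branch(i, lambda n: Tip)); mapping a function over it relabels the root and each of the infinitely many subtrees.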
4.1 --- a/doc-src/TutorialI/Datatype/Nested.thy Fri Sep 01 18:29:52 2000 +0200
4.2 +++ b/doc-src/TutorialI/Datatype/Nested.thy Fri Sep 01 19:09:44 2000 +0200
4.3 @@ -1,5 +1,5 @@
4.4 (*<*)
4.5 -theory Nested = Main:;
4.6 +theory Nested = ABexpr:
4.7 (*>*)
4.8
4.9 text{*
4.10 @@ -12,18 +12,18 @@
4.11 datatype ('a,'b)"term" = Var 'a | App 'b "('a,'b)term list";
4.12
4.13 text{*\noindent
4.14 -Note that we need to quote \isa{term} on the left to avoid confusion with
4.15 +Note that we need to quote @{text"term"} on the left to avoid confusion with
4.16 the command \isacommand{term}.
4.17 -Parameter \isa{'a} is the type of variables and \isa{'b} the type of
4.18 +Parameter @{typ"'a"} is the type of variables and @{typ"'b"} the type of
4.19 function symbols.
4.20 A mathematical term like $f(x,g(y))$ becomes @{term"App f [Var x, App g
4.21 - [Var y]]"}, where \isa{f}, \isa{g}, \isa{x}, \isa{y} are
4.22 + [Var y]]"}, where @{term"f"}, @{term"g"}, @{term"x"}, @{term"y"} are
4.23 suitable values, e.g.\ numbers or strings.
4.24
4.25 -What complicates the definition of \isa{term} is the nested occurrence of
4.26 -\isa{term} inside \isa{list} on the right-hand side. In principle,
4.27 +What complicates the definition of @{text"term"} is the nested occurrence of
4.28 +@{text"term"} inside @{text"list"} on the right-hand side. In principle,
4.29 nested recursion can be eliminated in favour of mutual recursion by unfolding
4.30 -the offending datatypes, here \isa{list}. The result for \isa{term}
4.31 +the offending datatypes, here @{text"list"}. The result for @{text"term"}
4.32 would be something like
4.33 \medskip
4.34
4.35 @@ -33,7 +33,7 @@
4.36 \noindent
4.37 Although we do not recommend this unfolding to the user, it shows how to
4.38 simulate nested recursion by mutual recursion.
4.39 -Now we return to the initial definition of \isa{term} using
4.40 +Now we return to the initial definition of @{text"term"} using
4.41 nested recursion.
4.42
4.43 Let us define a substitution function on terms. Because terms involve term
4.44 @@ -53,7 +53,7 @@
4.45 "substs s (t # ts) = subst s t # substs s ts";
4.46
4.47 text{*\noindent
4.48 -You should ignore the label \isa{subst\_App} for the moment.
4.49 +You should ignore the label @{thm[source]subst_App} for the moment.
4.50
4.51 Similarly, when proving a statement about terms inductively, we need
4.52 to prove a related statement about term lists simultaneously. For example,
4.53 @@ -66,39 +66,39 @@
4.54 by(induct_tac t and ts, simp_all);
4.55
4.56 text{*\noindent
4.57 -Note that \isa{Var} is the identity substitution because by definition it
4.58 -leaves variables unchanged: @{term"subst Var (Var x) = Var x"}. Note also
4.59 +Note that @{term"Var"} is the identity substitution because by definition it
4.60 +leaves variables unchanged: @{prop"subst Var (Var x) = Var x"}. Note also
4.61 that the type annotations are necessary because otherwise there is nothing in
4.62 the goal to enforce that both halves of the goal talk about the same type
4.63 -parameters \isa{('a,'b)}. As a result, induction would fail
4.64 +parameters @{text"('a,'b)"}. As a result, induction would fail
4.65 because the two halves of the goal would be unrelated.
4.66
4.67 \begin{exercise}
4.68 The fact that substitution distributes over composition can be expressed
4.69 roughly as follows:
4.70 -\begin{ttbox}
4.71 -subst (f o g) t = subst f (subst g t)
4.72 -\end{ttbox}
4.73 +\begin{quote}
4.74 +@{text[display]"subst (f o g) t = subst f (subst g t)"}
4.75 +\end{quote}
4.76 Correct this statement (you will find that it does not type-check),
4.77 -strengthen it, and prove it. (Note: \ttindexbold{o} is function composition;
4.78 -its definition is found in theorem \isa{o_def}).
4.79 +strengthen it, and prove it. (Note: \isaindexbold{o} is function composition;
4.80 +its definition is found in theorem @{thm[source]o_def}).
4.81 \end{exercise}
4.82 \begin{exercise}\label{ex:trev-trev}
4.83 - Define a function \isa{trev} of type @{typ"('a,'b)term => ('a,'b)term"} that
4.84 - recursively reverses the order of arguments of all function symbols in a
4.85 - term. Prove that \isa{trev(trev t) = t}.
4.86 + Define a function @{text"trev"} of type @{typ"('a,'b)term => ('a,'b)term"}
4.87 +that recursively reverses the order of arguments of all function symbols in a
4.88 + term. Prove that @{prop"trev(trev t) = t"}.
4.89 \end{exercise}
4.90
4.91 The experienced functional programmer may feel that our above definition of
4.92 -\isa{subst} is unnecessarily complicated in that \isa{substs} is completely
4.93 -unnecessary. The @{term"App"}-case can be defined directly as
4.94 +@{term"subst"} is unnecessarily complicated in that @{term"substs"} is
4.95 +completely unnecessary. The @{term"App"}-case can be defined directly as
4.96 \begin{quote}
4.97 @{term[display]"subst s (App f ts) = App f (map (subst s) ts)"}
4.98 \end{quote}
4.99 where @{term"map"} is the standard list function such that
4.100 -\isa{map f [x1,...,xn] = [f x1,...,f xn]}. This is true, but Isabelle insists
4.101 -on the above fixed format. Fortunately, we can easily \emph{prove} that the
4.102 -suggested equation holds:
4.103 +@{text"map f [x1,...,xn] = [f x1,...,f xn]"}. This is true, but Isabelle
4.104 +insists on the above fixed format. Fortunately, we can easily \emph{prove}
4.105 +that the suggested equation holds:
4.106 *}
4.107
4.108 lemma [simp]: "subst s (App f ts) = App f (map (subst s) ts)"
4.109 @@ -115,15 +115,15 @@
4.110 The advantage is that, now that we have replaced @{term"substs"} by
4.111 @{term"map"}, we can profit from the large number of pre-proved lemmas
4.112 about @{term"map"}. Unfortunately inductive proofs about type
4.113 -\isa{term} are still awkward because they expect a conjunction. One
4.114 +@{text"term"} are still awkward because they expect a conjunction. One
4.115 could derive a new induction principle as well (see
4.116 \S\ref{sec:derive-ind}), but it turns out to be simpler to define
4.117 functions by \isacommand{recdef} instead of \isacommand{primrec}.
4.118 The details are explained in \S\ref{sec:advanced-recdef} below.
4.119
4.120 Of course, you may also combine mutual and nested recursion. For example,
4.121 -constructor \isa{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
4.122 -expressions as its argument: \isa{Sum "'a aexp list"}.
4.123 +constructor @{text"Sum"} in \S\ref{sec:datatype-mut-rec} could take a list of
4.124 +expressions as its argument: @{text"Sum"}~@{typ[quotes]"'a aexp list"}.
4.125 *}
4.126 (*<*)
4.127 end
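The map-based App equation proved in the hunk above, together with the trev exercise, admits a direct Python sketch. Terms are tagged tuples, and the list comprehension plays the role of map; all names are illustrative.

```python
# ('a,'b) term as tagged tuples: ('Var', name) | ('App', fun, [terms])

def subst(s, t):
    """Apply substitution s (a function from variables to terms) to t."""
    if t[0] == 'Var':
        return s(t[1])
    _, f, ts = t
    return ('App', f, [subst(s, u) for u in ts])  # the map-style App case

def trev(t):
    """Recursively reverse the argument lists of all function symbols."""
    if t[0] == 'Var':
        return t
    _, f, ts = t
    return ('App', f, [trev(u) for u in reversed(ts)])
```

As in the text, the Var constructor is the identity substitution, and the corrected distribution law reads subst(lambda v: subst(f, g(v)), t) == subst(f, subst(g, t)).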
5.1 --- a/doc-src/TutorialI/Datatype/document/ABexpr.tex Fri Sep 01 18:29:52 2000 +0200
5.2 +++ b/doc-src/TutorialI/Datatype/document/ABexpr.tex Fri Sep 01 19:09:44 2000 +0200
5.3 @@ -26,7 +26,7 @@
5.4 \noindent
5.5 Type \isa{aexp} is similar to \isa{expr} in \S\ref{sec:ExprCompiler},
5.6 except that we have fixed the values to be of type \isa{nat} and that we
5.7 -have fixed the two binary operations \isa{Sum} and \isa{Diff}. Boolean
5.8 +have fixed the two binary operations \isa{aexp{\isachardot}Sum} and \isa{Diff}. Boolean
5.9 expressions can be arithmetic comparisons, conjunctions and negations.
5.10 The semantics is fixed via two evaluation functions%
5.11 \end{isamarkuptext}%
5.12 @@ -34,7 +34,7 @@
5.13 \ \ \ \ \ \ \ \ evalb\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ bexp\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ nat{\isacharparenright}\ {\isasymRightarrow}\ bool{\isachardoublequote}%
5.14 \begin{isamarkuptext}%
5.15 \noindent
5.16 -that take an expression and an environment (a mapping from variables \isa{'a} to values
5.17 +that take an expression and an environment (a mapping from variables \isa{{\isacharprime}a} to values
5.18 \isa{nat}) and return its arithmetic/boolean
5.19 value. Since the datatypes are mutually recursive, so are functions that
5.20 operate on them. Hence they need to be defined in a single \isacommand{primrec}
5.21 @@ -61,8 +61,8 @@
5.22 \noindent
5.23 The first argument is a function mapping variables to expressions, the
5.24 substitution. It is applied to all variables in the second argument. As a
5.25 -result, the type of variables in the expression may change from \isa{'a}
5.26 -to \isa{'b}. Note that there are only arithmetic and no boolean variables.%
5.27 +result, the type of variables in the expression may change from \isa{{\isacharprime}a}
5.28 +to \isa{{\isacharprime}b}. Note that there are only arithmetic and no boolean variables.%
5.29 \end{isamarkuptext}%
5.30 \isacommand{primrec}\isanewline
5.31 \ \ {\isachardoublequote}substa\ s\ {\isacharparenleft}IF\ b\ a\isadigit{1}\ a\isadigit{2}{\isacharparenright}\ {\isacharequal}\isanewline
5.32 @@ -98,12 +98,12 @@
5.33 an inductive proof expects a goal of the form
5.34 \[ P@1(x@1)\ \land \dots \land P@n(x@n) \]
5.35 where each variable $x@i$ is of type $\tau@i$. Induction is started by
5.36 -\begin{ttbox}
5.37 -apply(induct_tac \(x@1\) \texttt{and} \(\dots\) \texttt{and} \(x@n\));
5.38 -\end{ttbox}
5.39 +\begin{isabelle}
5.40 +\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac} $x@1$ \isa{and} \dots\ \isa{and} $x@n$\isa{{\isacharparenright}}
5.41 +\end{isabelle}
5.42
5.43 \begin{exercise}
5.44 - Define a function \isa{norma} of type \isa{\mbox{{\isacharprime}a}\ aexp\ {\isasymRightarrow}\ \mbox{{\isacharprime}a}\ aexp} that
5.45 + Define a function \isa{norma} of type \isa{{\isacharprime}a\ aexp\ {\isasymRightarrow}\ {\isacharprime}a\ aexp} that
5.46 replaces \isa{IF}s with complex boolean conditions by nested
5.47 \isa{IF}s where each condition is a \isa{Less} --- \isa{And} and
5.48 \isa{Neg} should be eliminated completely. Prove that \isa{norma}
6.1 --- a/doc-src/TutorialI/Datatype/document/Fundata.tex Fri Sep 01 18:29:52 2000 +0200
6.2 +++ b/doc-src/TutorialI/Datatype/document/Fundata.tex Fri Sep 01 19:09:44 2000 +0200
6.3 @@ -2,9 +2,10 @@
6.4 \begin{isabellebody}%
6.5 \isacommand{datatype}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}i{\isacharparenright}bigtree\ {\isacharequal}\ Tip\ {\isacharbar}\ Branch\ {\isacharprime}a\ {\isachardoublequote}{\isacharprime}i\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}i{\isacharparenright}bigtree{\isachardoublequote}%
6.6 \begin{isamarkuptext}%
6.7 -\noindent Parameter \isa{'a} is the type of values stored in
6.8 -the \isa{Branch}es of the tree, whereas \isa{'i} is the index
6.9 -type over which the tree branches. If \isa{'i} is instantiated to
6.10 +\noindent
6.11 +Parameter \isa{{\isacharprime}a} is the type of values stored in
6.12 +the \isa{Branch}es of the tree, whereas \isa{{\isacharprime}i} is the index
6.13 +type over which the tree branches. If \isa{{\isacharprime}i} is instantiated to
6.14 \isa{bool}, the result is a binary tree; if it is instantiated to
6.15 \isa{nat}, we have an infinitely branching tree because each node
6.16 has as many subtrees as there are natural numbers. How can we possibly
6.17 @@ -12,7 +13,7 @@
6.18 \begin{quote}
6.19
6.20 \begin{isabelle}%
6.21 -Branch\ \isadigit{0}\ {\isacharparenleft}{\isasymlambda}\mbox{i}{\isachardot}\ Branch\ \mbox{i}\ {\isacharparenleft}{\isasymlambda}\mbox{n}{\isachardot}\ Tip{\isacharparenright}{\isacharparenright}
6.22 +Branch\ \isadigit{0}\ {\isacharparenleft}{\isasymlambda}i{\isachardot}\ Branch\ i\ {\isacharparenleft}{\isasymlambda}n{\isachardot}\ Tip{\isacharparenright}{\isacharparenright}
6.23 \end{isabelle}%
6.24
6.25 \end{quote}
6.26 @@ -20,7 +21,7 @@
6.27 root is labeled with 0 and whose $i$th subtree is labeled with $i$ and
6.28 has merely \isa{Tip}s as further subtrees.
6.29
6.30 -Function \isa{map_bt} applies a function to all labels in a \isa{bigtree}:%
6.31 +Function \isa{map{\isacharunderscore}bt} applies a function to all labels in a \isa{bigtree}:%
6.32 \end{isamarkuptext}%
6.33 \isacommand{consts}\ map{\isacharunderscore}bt\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a\ {\isasymRightarrow}\ {\isacharprime}b{\isacharparenright}\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}i{\isacharparenright}bigtree\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}b{\isacharcomma}{\isacharprime}i{\isacharparenright}bigtree{\isachardoublequote}\isanewline
6.34 \isacommand{primrec}\isanewline
6.35 @@ -28,12 +29,12 @@
6.36 {\isachardoublequote}map{\isacharunderscore}bt\ f\ {\isacharparenleft}Branch\ a\ F{\isacharparenright}\ {\isacharequal}\ Branch\ {\isacharparenleft}f\ a{\isacharparenright}\ {\isacharparenleft}{\isasymlambda}i{\isachardot}\ map{\isacharunderscore}bt\ f\ {\isacharparenleft}F\ i{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
6.37 \begin{isamarkuptext}%
6.38 \noindent This is a valid \isacommand{primrec} definition because the
6.39 -recursive calls of \isa{map_bt} involve only subtrees obtained from
6.40 +recursive calls of \isa{map{\isacharunderscore}bt} involve only subtrees obtained from
6.41 \isa{F}, i.e.\ the left-hand side. Thus termination is assured. The
6.42 -seasoned functional programmer might have written \isa{map{\isacharunderscore}bt\ \mbox{f}\ {\isasymcirc}\ \mbox{F}}
6.43 -instead of \isa{{\isasymlambda}\mbox{i}{\isachardot}\ map{\isacharunderscore}bt\ \mbox{f}\ {\isacharparenleft}\mbox{F}\ \mbox{i}{\isacharparenright}}, but the former is not accepted by
6.44 +seasoned functional programmer might have written \isa{map{\isacharunderscore}bt\ f\ {\isasymcirc}\ F}
6.45 +instead of \isa{{\isasymlambda}i{\isachardot}\ map{\isacharunderscore}bt\ f\ {\isacharparenleft}F\ i{\isacharparenright}}, but the former is not accepted by
6.46 Isabelle because the termination proof is not as obvious since
6.47 -\isa{map_bt} is only partially applied.
6.48 +\isa{map{\isacharunderscore}bt} is only partially applied.
6.49
6.50 The following lemma has a canonical proof%
6.51 \end{isamarkuptext}%
7.1 --- a/doc-src/TutorialI/Datatype/document/Nested.tex Fri Sep 01 18:29:52 2000 +0200
7.2 +++ b/doc-src/TutorialI/Datatype/document/Nested.tex Fri Sep 01 19:09:44 2000 +0200
7.3 @@ -12,9 +12,9 @@
7.4 \noindent
7.5 Note that we need to quote \isa{term} on the left to avoid confusion with
7.6 the command \isacommand{term}.
7.7 -Parameter \isa{'a} is the type of variables and \isa{'b} the type of
7.8 +Parameter \isa{{\isacharprime}a} is the type of variables and \isa{{\isacharprime}b} the type of
7.9 function symbols.
7.10 -A mathematical term like $f(x,g(y))$ becomes \isa{App\ \mbox{f}\ {\isacharbrackleft}Var\ \mbox{x}{\isacharcomma}\ App\ \mbox{g}\ {\isacharbrackleft}Var\ \mbox{y}{\isacharbrackright}{\isacharbrackright}}, where \isa{f}, \isa{g}, \isa{x}, \isa{y} are
7.11 +A mathematical term like $f(x,g(y))$ becomes \isa{App\ f\ {\isacharbrackleft}term{\isachardot}Var\ x{\isacharcomma}\ App\ g\ {\isacharbrackleft}term{\isachardot}Var\ y{\isacharbrackright}{\isacharbrackright}}, where \isa{f}, \isa{g}, \isa{x}, \isa{y} are
7.12 suitable values, e.g.\ numbers or strings.
7.13
7.14 What complicates the definition of \isa{term} is the nested occurrence of
7.15 @@ -49,7 +49,7 @@
7.16 \ \ {\isachardoublequote}substs\ s\ {\isacharparenleft}t\ {\isacharhash}\ ts{\isacharparenright}\ {\isacharequal}\ subst\ s\ t\ {\isacharhash}\ substs\ s\ ts{\isachardoublequote}%
7.17 \begin{isamarkuptext}%
7.18 \noindent
7.19 -You should ignore the label \isa{subst\_App} for the moment.
7.20 +You should ignore the label \isa{subst{\isacharunderscore}App} for the moment.
7.21
7.22 Similarly, when proving a statement about terms inductively, we need
7.23 to prove a related statement about term lists simultaneously. For example,
7.24 @@ -61,43 +61,47 @@
7.25 \isacommand{by}{\isacharparenleft}induct{\isacharunderscore}tac\ t\ \isakeyword{and}\ ts{\isacharcomma}\ simp{\isacharunderscore}all{\isacharparenright}%
7.26 \begin{isamarkuptext}%
7.27 \noindent
7.28 -Note that \isa{Var} is the identity substitution because by definition it
7.29 -leaves variables unchanged: \isa{subst\ Var\ {\isacharparenleft}Var\ \mbox{x}{\isacharparenright}\ {\isacharequal}\ Var\ \mbox{x}}. Note also
7.30 +Note that \isa{term{\isachardot}Var} is the identity substitution because by definition it
7.31 +leaves variables unchanged: \isa{subst\ term{\isachardot}Var\ {\isacharparenleft}term{\isachardot}Var\ x{\isacharparenright}\ {\isacharequal}\ term{\isachardot}Var\ x}. Note also
7.32 that the type annotations are necessary because otherwise there is nothing in
7.33 the goal to enforce that both halves of the goal talk about the same type
7.34 -parameters \isa{('a,'b)}. As a result, induction would fail
7.35 +parameters \isa{{\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}}. As a result, induction would fail
7.36 because the two halves of the goal would be unrelated.
7.37
7.38 \begin{exercise}
7.39 The fact that substitution distributes over composition can be expressed
7.40 roughly as follows:
7.41 -\begin{ttbox}
7.42 -subst (f o g) t = subst f (subst g t)
7.43 -\end{ttbox}
7.44 +\begin{quote}
7.45 +
7.46 +\begin{isabelle}%
7.47 +subst\ {\isacharparenleft}f\ o\ g{\isacharparenright}\ t\ {\isacharequal}\ subst\ f\ {\isacharparenleft}subst\ g\ t{\isacharparenright}
7.48 +\end{isabelle}%
7.49 +
7.50 +\end{quote}
7.51 Correct this statement (you will find that it does not type-check),
7.52 -strengthen it, and prove it. (Note: \ttindexbold{o} is function composition;
7.53 -its definition is found in theorem \isa{o_def}).
7.54 +strengthen it, and prove it. (Note: \isaindexbold{o} is function composition;
7.55 +its definition is found in theorem \isa{o{\isacharunderscore}def}).
7.56 \end{exercise}
7.57 \begin{exercise}\label{ex:trev-trev}
7.58 - Define a function \isa{trev} of type \isa{{\isacharparenleft}\mbox{{\isacharprime}a}{\isacharcomma}\ \mbox{{\isacharprime}b}{\isacharparenright}\ term\ {\isasymRightarrow}\ {\isacharparenleft}\mbox{{\isacharprime}a}{\isacharcomma}\ \mbox{{\isacharprime}b}{\isacharparenright}\ term} that
7.59 - recursively reverses the order of arguments of all function symbols in a
7.60 - term. Prove that \isa{trev(trev t) = t}.
7.61 + Define a function \isa{trev} of type \isa{{\isacharparenleft}{\isacharprime}a{\isacharcomma}\ {\isacharprime}b{\isacharparenright}\ term\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}\ {\isacharprime}b{\isacharparenright}\ term}
7.62 +that recursively reverses the order of arguments of all function symbols in a
7.63 + term. Prove that \isa{trev\ {\isacharparenleft}trev\ t{\isacharparenright}\ {\isacharequal}\ t}.
7.64 \end{exercise}
7.65
7.66 The experienced functional programmer may feel that our above definition of
7.67 -\isa{subst} is unnecessarily complicated in that \isa{substs} is completely
7.68 -unnecessary. The \isa{App}-case can be defined directly as
7.69 +\isa{subst} is unnecessarily complicated in that \isa{substs} is
7.70 +completely unnecessary. The \isa{App}-case can be defined directly as
7.71 \begin{quote}
7.72
7.73 \begin{isabelle}%
7.74 -subst\ \mbox{s}\ {\isacharparenleft}App\ \mbox{f}\ \mbox{ts}{\isacharparenright}\ {\isacharequal}\ App\ \mbox{f}\ {\isacharparenleft}map\ {\isacharparenleft}subst\ \mbox{s}{\isacharparenright}\ \mbox{ts}{\isacharparenright}
7.75 +subst\ s\ {\isacharparenleft}App\ f\ ts{\isacharparenright}\ {\isacharequal}\ App\ f\ {\isacharparenleft}map\ {\isacharparenleft}subst\ s{\isacharparenright}\ ts{\isacharparenright}
7.76 \end{isabelle}%
7.77
7.78 \end{quote}
7.79 where \isa{map} is the standard list function such that
7.80 -\isa{map f [x1,...,xn] = [f x1,...,f xn]}. This is true, but Isabelle insists
7.81 -on the above fixed format. Fortunately, we can easily \emph{prove} that the
7.82 -suggested equation holds:%
7.83 +\isa{map\ f\ {\isacharbrackleft}x\isadigit{1}{\isacharcomma}{\isachardot}{\isachardot}{\isachardot}{\isacharcomma}xn{\isacharbrackright}\ {\isacharequal}\ {\isacharbrackleft}f\ x\isadigit{1}{\isacharcomma}{\isachardot}{\isachardot}{\isachardot}{\isacharcomma}f\ xn{\isacharbrackright}}. This is true, but Isabelle
7.84 +insists on the above fixed format. Fortunately, we can easily \emph{prove}
7.85 +that the suggested equation holds:%
7.86 \end{isamarkuptext}%
7.87 \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}subst\ s\ {\isacharparenleft}App\ f\ ts{\isacharparenright}\ {\isacharequal}\ App\ f\ {\isacharparenleft}map\ {\isacharparenleft}subst\ s{\isacharparenright}\ ts{\isacharparenright}{\isachardoublequote}\isanewline
7.88 \isacommand{by}{\isacharparenleft}induct{\isacharunderscore}tac\ ts{\isacharcomma}\ simp{\isacharunderscore}all{\isacharparenright}%
7.89 @@ -120,7 +124,7 @@
7.90
7.91 Of course, you may also combine mutual and nested recursion. For example,
7.92 constructor \isa{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
7.93 -expressions as its argument: \isa{Sum "'a aexp list"}.%
7.94 +expressions as its argument: \isa{Sum}~\isa{{\isachardoublequote}{\isacharprime}a\ aexp\ list{\isachardoublequote}}.%
7.95 \end{isamarkuptext}%
7.96 \end{isabellebody}%
7.97 %%% Local Variables:
8.1 --- a/doc-src/TutorialI/Ifexpr/Ifexpr.thy Fri Sep 01 18:29:52 2000 +0200
8.2 +++ b/doc-src/TutorialI/Ifexpr/Ifexpr.thy Fri Sep 01 19:09:44 2000 +0200
8.3 @@ -16,16 +16,16 @@
8.4 text{*\noindent
8.5 The two constants are represented by @{term"Const True"} and
8.6 @{term"Const False"}. Variables are represented by terms of the form
8.7 -@{term"Var n"}, where @{term"n"} is a natural number (type \isa{nat}).
8.8 +@{term"Var n"}, where @{term"n"} is a natural number (type @{typ"nat"}).
8.9 For example, the formula $P@0 \land \neg P@1$ is represented by the term
8.10 @{term"And (Var 0) (Neg(Var 1))"}.
8.11
8.12 \subsubsection{What is the value of a boolean expression?}
8.13
8.14 The value of a boolean expression depends on the value of its variables.
8.15 -Hence the function \isa{value} takes an additional parameter, an {\em
8.16 - environment} of type @{typ"nat => bool"}, which maps variables to
8.17 -their values:
8.18 +Hence the function @{text"value"} takes an additional parameter, an
8.19 +\emph{environment} of type @{typ"nat => bool"}, which maps variables to their
8.20 +values:
8.21 *}
8.22
8.23 consts value :: "boolex \\<Rightarrow> (nat \\<Rightarrow> bool) \\<Rightarrow> bool";
8.24 @@ -40,14 +40,14 @@
8.25
8.26 An alternative and often more efficient (because in a certain sense
8.27 canonical) representation is given by so-called \emph{If-expressions} built up
8.28 -from constants (\isa{CIF}), variables (\isa{VIF}) and conditionals
8.29 -(\isa{IF}):
8.30 +from constants (@{term"CIF"}), variables (@{term"VIF"}) and conditionals
8.31 +(@{term"IF"}):
8.32 *}
8.33
8.34 datatype ifex = CIF bool | VIF nat | IF ifex ifex ifex;
8.35
8.36 text{*\noindent
8.37 -The evaluation if If-expressions proceeds as for \isa{boolex}:
8.38 +The evaluation of If-expressions proceeds as for @{typ"boolex"}:
8.39 *}
8.40
8.41 consts valif :: "ifex \\<Rightarrow> (nat \\<Rightarrow> bool) \\<Rightarrow> bool";
8.42 @@ -60,9 +60,9 @@
8.43 text{*
8.44 \subsubsection{Transformation into and of If-expressions}
8.45
8.46 -The type \isa{boolex} is close to the customary representation of logical
8.47 -formulae, whereas \isa{ifex} is designed for efficiency. It is easy to
8.48 -translate from \isa{boolex} into \isa{ifex}:
8.49 +The type @{typ"boolex"} is close to the customary representation of logical
8.50 +formulae, whereas @{typ"ifex"} is designed for efficiency. It is easy to
8.51 +translate from @{typ"boolex"} into @{typ"ifex"}:
8.52 *}
8.53
8.54 consts bool2if :: "boolex \\<Rightarrow> ifex";
8.55 @@ -73,7 +73,7 @@
8.56 "bool2if (And b c) = IF (bool2if b) (bool2if c) (CIF False)";
8.57
8.58 text{*\noindent
8.59 -At last, we have something we can verify: that \isa{bool2if} preserves the
8.60 +At last, we have something we can verify: that @{term"bool2if"} preserves the
8.61 value of its argument:
8.62 *}
8.63
8.64 @@ -91,7 +91,7 @@
8.65 not show them below.
8.66
8.67 More interesting is the transformation of If-expressions into a normal form
8.68 -where the first argument of \isa{IF} cannot be another \isa{IF} but
8.69 +where the first argument of @{term"IF"} cannot be another @{term"IF"} but
8.70 must be a constant or variable. Such a normal form can be computed by
8.71 repeatedly replacing a subterm of the form @{term"IF (IF b x y) z u"} by
8.72 @{term"IF b (IF x z u) (IF y z u)"}, which has the same value. The following
8.73 @@ -120,7 +120,7 @@
8.74
8.75 text{*\noindent
8.76 The proof is canonical, provided we first show the following simplification
8.77 -lemma (which also helps to understand what \isa{normif} does):
8.78 +lemma (which also helps to understand what @{term"normif"} does):
8.79 *}
8.80
8.81 lemma [simp]:
8.82 @@ -135,9 +135,9 @@
8.83 (*>*)
8.84 text{*\noindent
8.85 Note that the lemma does not have a name, but is implicitly used in the proof
8.86 -of the theorem shown above because of the \isa{[simp]} attribute.
8.87 +of the theorem shown above because of the @{text"[simp]"} attribute.
8.88
8.89 -But how can we be sure that \isa{norm} really produces a normal form in
8.90 +But how can we be sure that @{term"norm"} really produces a normal form in
8.91 the above sense? We define a function that tests If-expressions for normality
8.92 *}
8.93
8.94 @@ -149,8 +149,8 @@
8.95 (case b of CIF b \\<Rightarrow> True | VIF x \\<Rightarrow> True | IF x y z \\<Rightarrow> False))";
8.96
8.97 text{*\noindent
8.98 -and prove \isa{normal(norm b)}. Of course, this requires a lemma about
8.99 -normality of \isa{normif}:
8.100 +and prove @{term"normal(norm b)"}. Of course, this requires a lemma about
8.101 +normality of @{term"normif"}:
8.102 *}
8.103
8.104 lemma[simp]: "\\<forall>t e. normal(normif b t e) = (normal t \\<and> normal e)";
9.1 --- a/doc-src/TutorialI/Ifexpr/document/Ifexpr.tex Fri Sep 01 18:29:52 2000 +0200
9.2 +++ b/doc-src/TutorialI/Ifexpr/document/Ifexpr.tex Fri Sep 01 19:09:44 2000 +0200
9.3 @@ -14,16 +14,16 @@
9.4 \noindent
9.5 The two constants are represented by \isa{Const\ True} and
9.6 \isa{Const\ False}. Variables are represented by terms of the form
9.7 -\isa{Var\ \mbox{n}}, where \isa{\mbox{n}} is a natural number (type \isa{nat}).
9.8 +\isa{Var\ n}, where \isa{n} is a natural number (type \isa{nat}).
9.9 For example, the formula $P@0 \land \neg P@1$ is represented by the term
9.10 \isa{And\ {\isacharparenleft}Var\ \isadigit{0}{\isacharparenright}\ {\isacharparenleft}Neg\ {\isacharparenleft}Var\ \isadigit{1}{\isacharparenright}{\isacharparenright}}.
9.11
9.12 \subsubsection{What is the value of a boolean expression?}
9.13
9.14 The value of a boolean expression depends on the value of its variables.
9.15 -Hence the function \isa{value} takes an additional parameter, an {\em
9.16 - environment} of type \isa{nat\ {\isasymRightarrow}\ bool}, which maps variables to
9.17 -their values:%
9.18 +Hence the function \isa{value} takes an additional parameter, an
9.19 +\emph{environment} of type \isa{nat\ {\isasymRightarrow}\ bool}, which maps variables to their
9.20 +values:%
9.21 \end{isamarkuptext}%
9.22 \isacommand{consts}\ value\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}boolex\ {\isasymRightarrow}\ {\isacharparenleft}nat\ {\isasymRightarrow}\ bool{\isacharparenright}\ {\isasymRightarrow}\ bool{\isachardoublequote}\isanewline
9.23 \isacommand{primrec}\isanewline
9.24 @@ -66,7 +66,7 @@
9.25 {\isachardoublequote}bool\isadigit{2}if\ {\isacharparenleft}And\ b\ c{\isacharparenright}\ {\isacharequal}\ IF\ {\isacharparenleft}bool\isadigit{2}if\ b{\isacharparenright}\ {\isacharparenleft}bool\isadigit{2}if\ c{\isacharparenright}\ {\isacharparenleft}CIF\ False{\isacharparenright}{\isachardoublequote}%
9.26 \begin{isamarkuptext}%
9.27 \noindent
9.28 -At last, we have something we can verify: that \isa{bool2if} preserves the
9.29 +At last, we have something we can verify: that \isa{bool\isadigit{2}if} preserves the
9.30 value of its argument:%
9.31 \end{isamarkuptext}%
9.32 \isacommand{lemma}\ {\isachardoublequote}valif\ {\isacharparenleft}bool\isadigit{2}if\ b{\isacharparenright}\ env\ {\isacharequal}\ value\ b\ env{\isachardoublequote}%
9.33 @@ -84,8 +84,8 @@
9.34 More interesting is the transformation of If-expressions into a normal form
9.35 where the first argument of \isa{IF} cannot be another \isa{IF} but
9.36 must be a constant or variable. Such a normal form can be computed by
9.37 -repeatedly replacing a subterm of the form \isa{IF\ {\isacharparenleft}IF\ \mbox{b}\ \mbox{x}\ \mbox{y}{\isacharparenright}\ \mbox{z}\ \mbox{u}} by
9.38 -\isa{IF\ \mbox{b}\ {\isacharparenleft}IF\ \mbox{x}\ \mbox{z}\ \mbox{u}{\isacharparenright}\ {\isacharparenleft}IF\ \mbox{y}\ \mbox{z}\ \mbox{u}{\isacharparenright}}, which has the same value. The following
9.39 +repeatedly replacing a subterm of the form \isa{IF\ {\isacharparenleft}IF\ b\ x\ y{\isacharparenright}\ z\ u} by
9.40 +\isa{IF\ b\ {\isacharparenleft}IF\ x\ z\ u{\isacharparenright}\ {\isacharparenleft}IF\ y\ z\ u{\isacharparenright}}, which has the same value. The following
9.41 primitive recursive functions perform this task:%
9.42 \end{isamarkuptext}%
9.43 \isacommand{consts}\ normif\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}ifex\ {\isasymRightarrow}\ ifex\ {\isasymRightarrow}\ ifex\ {\isasymRightarrow}\ ifex{\isachardoublequote}\isanewline
9.44 @@ -116,7 +116,7 @@
9.45 \begin{isamarkuptext}%
9.46 \noindent
9.47 Note that the lemma does not have a name, but is implicitly used in the proof
9.48 -of the theorem shown above because of the \isa{[simp]} attribute.
9.49 +of the theorem shown above because of the \isa{{\isacharbrackleft}simp{\isacharbrackright}} attribute.
9.50
9.51 But how can we be sure that \isa{norm} really produces a normal form in
9.52 the above sense? We define a function that tests If-expressions for normality%
9.53 @@ -129,7 +129,7 @@
9.54 \ \ \ \ \ {\isacharparenleft}case\ b\ of\ CIF\ b\ {\isasymRightarrow}\ True\ {\isacharbar}\ VIF\ x\ {\isasymRightarrow}\ True\ {\isacharbar}\ IF\ x\ y\ z\ {\isasymRightarrow}\ False{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
9.55 \begin{isamarkuptext}%
9.56 \noindent
9.57 -and prove \isa{normal(norm b)}. Of course, this requires a lemma about
9.58 +and prove \isa{normal\ {\isacharparenleft}norm\ b{\isacharparenright}}. Of course, this requires a lemma about
9.59 normality of \isa{normif}:%
9.60 \end{isamarkuptext}%
9.61 \isacommand{lemma}{\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isasymforall}t\ e{\isachardot}\ normal{\isacharparenleft}normif\ b\ t\ e{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}normal\ t\ {\isasymand}\ normal\ e{\isacharparenright}{\isachardoublequote}\end{isabellebody}%
10.1 --- a/doc-src/TutorialI/Misc/AdvancedInd.thy Fri Sep 01 18:29:52 2000 +0200
10.2 +++ b/doc-src/TutorialI/Misc/AdvancedInd.thy Fri Sep 01 19:09:44 2000 +0200
10.3 @@ -21,7 +21,7 @@
10.4 apply(induct_tac xs);
10.5
10.6 txt{*\noindent
10.7 -(where \isa{hd} and \isa{last} return the first and last element of a
10.8 +(where @{term"hd"} and @{term"last"} return the first and last element of a
10.9 non-empty list)
10.10 produces the warning
10.11 \begin{quote}\tt
10.12 @@ -35,16 +35,16 @@
10.13 \begin{isabelle}
10.14 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
10.15 \end{isabelle}
10.16 -We cannot prove this equality because we do not know what \isa{hd} and
10.17 -\isa{last} return when applied to \isa{[]}.
10.18 +We cannot prove this equality because we do not know what @{term"hd"} and
10.19 +@{term"last"} return when applied to @{term"[]"}.
10.20
10.21 The point is that we have violated the above warning. Because the induction
10.22 -formula is only the conclusion, the occurrence of \isa{xs} in the premises is
10.23 +formula is only the conclusion, the occurrence of @{term"xs"} in the premises is
10.24 not modified by induction. Thus the case that should have been trivial
10.25 becomes unprovable. Fortunately, the solution is easy:
10.26 \begin{quote}
10.27 \emph{Pull all occurrences of the induction variable into the conclusion
10.28 -using \isa{\isasymlongrightarrow}.}
10.29 +using @{text"\<longrightarrow>"}.}
10.30 \end{quote}
10.31 This means we should prove
10.32 *};
10.33 @@ -59,12 +59,13 @@
10.34 \begin{isabelle}
10.35 \ 1.\ []\ {\isasymnoteq}\ []\ {\isasymlongrightarrow}\ hd\ (rev\ [])\ =\ last\ []
10.36 \end{isabelle}
10.37 -which is trivial, and \isa{auto} finishes the whole proof.
10.38 +which is trivial, and @{text"auto"} finishes the whole proof.
10.39
10.40 -If \isa{hd\_rev} is meant to be a simplification rule, you are done. But if you
10.41 -really need the \isa{\isasymLongrightarrow}-version of \isa{hd\_rev}, for
10.42 -example because you want to apply it as an introduction rule, you need to
10.43 -derive it separately, by combining it with modus ponens:
10.44 +If @{thm[source]hd_rev} is meant to be a simplification rule, you are
10.45 +done. But if you really need the @{text"\<Longrightarrow>"}-version of
10.46 +@{thm[source]hd_rev}, for example because you want to apply it as an
10.47 +introduction rule, you need to derive it separately, by combining it with
10.48 +modus ponens:
10.49 *};
10.50
10.51 lemmas hd_revI = hd_rev[THEN mp];
10.52 @@ -78,7 +79,7 @@
10.53 (see the remark?? in \S\ref{??}).
10.54 Additionally, you may also have to universally quantify some other variables,
10.55 which can yield a fairly complex conclusion.
10.56 -Here is a simple example (which is proved by \isa{blast}):
10.57 +Here is a simple example (which is proved by @{text"blast"}):
10.58 *};
10.59
10.60 lemma simple: "\\<forall>y. A y \\<longrightarrow> B y \<longrightarrow> B y & A y";
10.61 @@ -86,14 +87,14 @@
10.62
10.63 text{*\noindent
10.64 You can get the desired lemma by explicit
10.65 -application of modus ponens and \isa{spec}:
10.66 +application of modus ponens and @{thm[source]spec}:
10.67 *};
10.68
10.69 lemmas myrule = simple[THEN spec, THEN mp, THEN mp];
10.70
10.71 text{*\noindent
10.72 -or the wholesale stripping of \isa{\isasymforall} and
10.73 -\isa{\isasymlongrightarrow} in the conclusion via \isa{rulify}
10.74 +or the wholesale stripping of @{text"\<forall>"} and
10.75 +@{text"\<longrightarrow>"} in the conclusion via @{text"rulify"}
10.76 *};
10.77
10.78 lemmas myrule = simple[rulify];
10.79 @@ -130,10 +131,10 @@
10.80 induction schema. In such cases some existing standard induction schema can
10.81 be helpful. We show how to apply such induction schemas by an example.
10.82
10.83 -Structural induction on \isa{nat} is
10.84 +Structural induction on @{typ"nat"} is
10.85 usually known as ``mathematical induction''. There is also ``complete
10.86 induction'', where you must prove $P(n)$ under the assumption that $P(m)$
10.87 -holds for all $m<n$. In Isabelle, this is the theorem \isa{less\_induct}:
10.88 +holds for all $m<n$. In Isabelle, this is the theorem @{thm[source]less_induct}:
10.89 \begin{quote}
10.90 @{thm[display]"less_induct"[no_vars]}
10.91 \end{quote}
10.92 @@ -148,7 +149,7 @@
10.93 discouraged, because of the danger of inconsistencies. The above axiom does
10.94 not introduce an inconsistency because, for example, the identity function
10.95 satisfies it.}
10.96 -for \isa{f} it follows that @{term"n <= f n"}, which can
10.97 +for @{term"f"} it follows that @{prop"n <= f n"}, which can
10.98 be proved by induction on @{term"f n"}. Following the recipy outlined
10.99 above, we have to phrase the proposition as follows to allow induction:
10.100 *};
10.101 @@ -156,7 +157,7 @@
10.102 lemma f_incr_lem: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
10.103
10.104 txt{*\noindent
10.105 -To perform induction on \isa{k} using \isa{less\_induct}, we use the same
10.106 +To perform induction on @{term"k"} using @{thm[source]less_induct}, we use the same
10.107 general induction method as for recursion induction (see
10.108 \S\ref{sec:recdef-induction}):
10.109 *};
10.110 @@ -173,9 +174,9 @@
10.111 \ 1.\ {\isasymAnd}\mbox{n}.\ {\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i})\isanewline
10.112 \ \ \ \ \ \ \ {\isasymLongrightarrow}\ {\isasymforall}\mbox{i}.\ \mbox{n}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i}
10.113 \end{isabelle}
10.114 -After stripping the \isa{\isasymforall i}, the proof continues with a case
10.115 -distinction on \isa{i}. The case \isa{i = 0} is trivial and we focus on the
10.116 -other case:
10.117 +After stripping the @{text"\<forall>i"}, the proof continues with a case
10.118 +distinction on @{term"i"}. The case @{prop"i = 0"} is trivial and we focus on
10.119 +the other case:
10.120 \begin{isabelle}
10.121 \ 1.\ {\isasymAnd}\mbox{n}\ \mbox{i}\ \mbox{nat}.\isanewline
10.122 \ \ \ \ \ \ \ {\isasymlbrakk}{\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i});\ \mbox{i}\ =\ Suc\ \mbox{nat}{\isasymrbrakk}\isanewline
10.123 @@ -187,16 +188,16 @@
10.124
10.125 text{*\noindent
10.126 It is not surprising if you find the last step puzzling.
10.127 -The proof goes like this (writing \isa{j} instead of \isa{nat}).
10.128 -Since @{term"i = Suc j"} it suffices to show
10.129 -@{term"j < f(Suc j)"} (by \isa{Suc\_leI}: @{thm"Suc_leI"[no_vars]}). This is
10.130 -proved as follows. From \isa{f\_ax} we have @{term"f (f j) < f (Suc j)"}
10.131 -(1) which implies @{term"f j <= f (f j)"} (by the induction hypothesis).
10.132 -Using (1) once more we obtain @{term"f j < f(Suc j)"} (2) by transitivity
10.133 -(\isa{le_less_trans}: @{thm"le_less_trans"[no_vars]}).
10.134 -Using the induction hypothesis once more we obtain @{term"j <= f j"}
10.135 -which, together with (2) yields @{term"j < f (Suc j)"} (again by
10.136 -\isa{le_less_trans}).
10.137 +The proof goes like this (writing @{term"j"} instead of @{typ"nat"}).
10.138 +Since @{prop"i = Suc j"} it suffices to show
10.139 +@{prop"j < f(Suc j)"} (by @{thm[source]Suc_leI}: @{thm"Suc_leI"[no_vars]}). This is
10.140 +proved as follows. From @{thm[source]f_ax} we have @{prop"f (f j) < f (Suc j)"}
10.141 +(1) which implies @{prop"f j <= f (f j)"} (by the induction hypothesis).
10.142 +Using (1) once more we obtain @{prop"f j < f(Suc j)"} (2) by transitivity
10.143 +(@{thm[source]le_less_trans}: @{thm"le_less_trans"[no_vars]}).
10.144 +Using the induction hypothesis once more we obtain @{prop"j <= f j"}
10.145 +which, together with (2) yields @{prop"j < f (Suc j)"} (again by
10.146 +@{thm[source]le_less_trans}).
10.147
10.148 This last step shows both the power and the danger of automatic proofs: they
10.149 will usually not tell you how the proof goes, because it can be very hard to
10.150 @@ -204,14 +205,14 @@
10.151 \S\ref{sec:part2?} introduces a language for writing readable yet concise
10.152 proofs.
10.153
10.154 -We can now derive the desired @{term"i <= f i"} from \isa{f\_incr}:
10.155 +We can now derive the desired @{prop"i <= f i"} from @{text"f_incr_lem"}:
10.156 *};
10.157
10.158 lemmas f_incr = f_incr_lem[rulify, OF refl];
10.159
10.160 text{*\noindent
10.161 -The final \isa{refl} gets rid of the premise \isa{?k = f ?i}. Again, we could
10.162 -have included this derivation in the original statement of the lemma:
10.163 +The final @{thm[source]refl} gets rid of the premise @{text"?k = f ?i"}. Again,
10.164 +we could have included this derivation in the original statement of the lemma:
10.165 *};
10.166
10.167 lemma f_incr[rulify, OF refl]: "\\<forall>i. k = f i \\<longrightarrow> i \\<le> f i";
10.168 @@ -219,22 +220,23 @@
10.169
10.170 text{*
10.171 \begin{exercise}
10.172 -From the above axiom and lemma for \isa{f} show that \isa{f} is the identity.
10.173 +From the above axiom and lemma for @{term"f"} show that @{term"f"} is the
10.174 +identity.
10.175 \end{exercise}
10.176
10.177 -In general, \isa{induct\_tac} can be applied with any rule \isa{r}
10.178 -whose conclusion is of the form \isa{?P ?x1 \dots ?xn}, in which case the
10.179 +In general, @{text"induct_tac"} can be applied with any rule $r$
10.180 +whose conclusion is of the form ${?}P~?x@1 \dots ?x@n$, in which case the
10.181 format is
10.182 -\begin{ttbox}
10.183 -apply(induct_tac y1 ... yn rule: r)
10.184 -\end{ttbox}\index{*induct_tac}%
10.185 -where \isa{y1}, \dots, \isa{yn} are variables in the first subgoal.
10.186 -In fact, \isa{induct\_tac} even allows the conclusion of
10.187 -\isa{r} to be an (iterated) conjunction of formulae of the above form, in
10.188 +\begin{quote}
10.189 +\isacommand{apply}@{text"(induct_tac"} $y@1 \dots y@n$ @{text"rule:"} $r$@{text")"}
10.190 +\end{quote}\index{*induct_tac}%
10.191 +where $y@1, \dots, y@n$ are variables in the first subgoal.
10.192 +In fact, @{text"induct_tac"} even allows the conclusion of
10.193 +$r$ to be an (iterated) conjunction of formulae of the above form, in
10.194 which case the application is
10.195 -\begin{ttbox}
10.196 -apply(induct_tac y1 ... yn and ... and z1 ... zm rule: r)
10.197 -\end{ttbox}
10.198 +\begin{quote}
10.199 +\isacommand{apply}@{text"(induct_tac"} $y@1 \dots y@n$ @{text"and"} \dots\ @{text"and"} $z@1 \dots z@m$ @{text"rule:"} $r$@{text")"}
10.200 +\end{quote}
10.201 *};
10.202
10.203 subsection{*Derivation of new induction schemas*};
10.204 @@ -242,18 +244,18 @@
10.205 text{*\label{sec:derive-ind}
10.206 Induction schemas are ordinary theorems and you can derive new ones
10.207 whenever you wish. This section shows you how to, using the example
10.208 -of \isa{less\_induct}. Assume we only have structural induction
10.209 +of @{thm[source]less_induct}. Assume we only have structural induction
10.210 available for @{typ"nat"} and want to derive complete induction. This
10.211 requires us to generalize the statement first:
10.212 *};
10.213
10.214 -lemma induct_lem: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) ==> \\<forall>m<n. P m";
10.215 +lemma induct_lem: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) \\<Longrightarrow> \\<forall>m<n. P m";
10.216 apply(induct_tac n);
10.217
10.218 txt{*\noindent
10.219 -The base case is trivially true. For the induction step (@{term"m <
10.220 -Suc n"}) we distinguish two cases: @{term"m < n"} is true by induction
10.221 -hypothesis and @{term"m = n"} follow from the assumption again using
10.222 +The base case is trivially true. For the induction step (@{prop"m <
10.223 +Suc n"}) we distinguish two cases: @{prop"m < n"} is true by induction
10.224 +hypothesis and @{prop"m = n"} follows from the assumption again using
10.225 the induction hypothesis:
10.226 *};
10.227
10.228 @@ -264,31 +266,31 @@
10.229 ML"reset quick_and_dirty"
10.230
10.231 text{*\noindent
10.232 -The elimination rule \isa{less_SucE} expresses the case distinction:
10.233 +The elimination rule @{thm[source]less_SucE} expresses the case distinction:
10.234 \begin{quote}
10.235 @{thm[display]"less_SucE"[no_vars]}
10.236 \end{quote}
10.237
10.238 Now it is straightforward to derive the original version of
10.239 -\isa{less\_induct} by manipulting the conclusion of the above lemma:
10.240 -instantiate \isa{n} by @{term"Suc n"} and \isa{m} by \isa{n} and
10.241 -remove the trivial condition @{term"n < Sc n"}. Fortunately, this
10.242 +@{thm[source]less_induct} by manipulating the conclusion of the above lemma:
10.243 +instantiate @{term"n"} by @{term"Suc n"} and @{term"m"} by @{term"n"} and
10.244 +remove the trivial condition @{prop"n < Suc n"}. Fortunately, this
10.245 happens automatically when we add the lemma as a new premise to the
10.246 desired goal:
10.247 *};
10.248
10.249 -theorem less_induct: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) ==> P n";
10.250 +theorem less_induct: "(\\<And>n::nat. \\<forall>m<n. P m \\<Longrightarrow> P n) \\<Longrightarrow> P n";
10.251 by(insert induct_lem, blast);
10.252
10.253 text{*\noindent
10.254 Finally we should mention that HOL already provides the mother of all
10.255 -inductions, \emph{wellfounded induction} (\isa{wf\_induct}):
10.256 +inductions, \emph{wellfounded induction} (@{thm[source]wf_induct}):
10.257 \begin{quote}
10.258 @{thm[display]"wf_induct"[no_vars]}
10.259 \end{quote}
10.260 -where @{term"wf r"} means that the relation \isa{r} is wellfounded.
10.261 -For example \isa{less\_induct} is the special case where \isa{r} is \isa{<} on @{typ"nat"}.
10.262 -For details see the library.
10.263 +where @{term"wf r"} means that the relation @{term"r"} is wellfounded.
10.264 +For example @{thm[source]less_induct} is the special case where @{term"r"} is
10.265 +@{text"<"} on @{typ"nat"}. For details see the library.
10.266 *};
10.267
10.268 (*<*)
11.1 --- a/doc-src/TutorialI/Misc/Itrev.thy Fri Sep 01 18:29:52 2000 +0200
11.2 +++ b/doc-src/TutorialI/Misc/Itrev.thy Fri Sep 01 19:09:44 2000 +0200
11.3 @@ -4,10 +4,10 @@
11.4
11.5 text{*
11.6 Function @{term"rev"} has quadratic worst-case running time
11.7 -because it calls function @{term"@"} for each element of the list and
11.8 -@{term"@"} is linear in its first argument. A linear time version of
11.9 +because it calls function @{text"@"} for each element of the list and
11.10 +@{text"@"} is linear in its first argument. A linear time version of
11.11 @{term"rev"} reqires an extra argument where the result is accumulated
11.12 -gradually, using only @{name"#"}:
11.13 +gradually, using only @{text"#"}:
11.14 *}
11.15
11.16 consts itrev :: "'a list \\<Rightarrow> 'a list \\<Rightarrow> 'a list";
12.1 --- a/doc-src/TutorialI/Misc/Tree.thy Fri Sep 01 18:29:52 2000 +0200
12.2 +++ b/doc-src/TutorialI/Misc/Tree.thy Fri Sep 01 19:09:44 2000 +0200
12.3 @@ -14,7 +14,7 @@
12.4 "mirror (Node l x r) = Node (mirror r) x (mirror l)";(*>*)
12.5
12.6 text{*\noindent
12.7 -and a function \isa{mirror} that mirrors a binary tree
12.8 +and a function @{term"mirror"} that mirrors a binary tree
12.9 by swapping subtrees (recursively). Prove
12.10 *}
12.11
12.12 @@ -30,7 +30,7 @@
12.13 (*>*)
12.14
12.15 text{*\noindent
12.16 -Define a function \isa{flatten} that flattens a tree into a list
12.17 +Define a function @{term"flatten"} that flattens a tree into a list
12.18 by traversing it in infix order. Prove
12.19 *}
12.20
13.1 --- a/doc-src/TutorialI/Misc/Tree2.thy Fri Sep 01 18:29:52 2000 +0200
13.2 +++ b/doc-src/TutorialI/Misc/Tree2.thy Fri Sep 01 19:09:44 2000 +0200
13.3 @@ -3,9 +3,9 @@
13.4 (*>*)
13.5
13.6 text{*\noindent In Exercise~\ref{ex:Tree} we defined a function
13.7 -\isa{flatten} from trees to lists. The straightforward version of
13.8 -\isa{flatten} is based on \isa{\at} and is thus, like \isa{rev}, quadratic.
13.9 -A linear time version of \isa{flatten} again reqires an extra
13.10 +@{term"flatten"} from trees to lists. The straightforward version of
13.11 +@{term"flatten"} is based on @{text"@"} and is thus, like @{term"rev"},
13.12 +quadratic. A linear time version of @{term"flatten"} again requires an extra
13.13 argument, the accumulator: *}
13.14
13.15 consts flatten2 :: "'a tree => 'a list => 'a list"
13.16 @@ -15,7 +15,7 @@
13.17 "flatten2 (Node l x r) xs = flatten2 l (x#(flatten2 r xs))"
13.18 (*>*)
13.19
13.20 -text{*\noindent Define \isa{flatten2} and prove
13.21 +text{*\noindent Define @{term"flatten2"} and prove
13.22 *}
13.23 (*<*)
13.24 lemma [simp]: "!xs. flatten2 t xs = flatten t @ xs";
14.1 --- a/doc-src/TutorialI/Misc/asm_simp.thy Fri Sep 01 18:29:52 2000 +0200
14.2 +++ b/doc-src/TutorialI/Misc/asm_simp.thy Fri Sep 01 19:09:44 2000 +0200
14.3 @@ -12,7 +12,7 @@
14.4 text{*\noindent
14.5 The second assumption simplifies to @{term"xs = []"}, which in turn
14.6 simplifies the first assumption to @{term"zs = ys"}, thus reducing the
14.7 -conclusion to @{term"ys = ys"} and hence to \isa{True}.
14.8 +conclusion to @{term"ys = ys"} and hence to @{term"True"}.
14.9
14.10 In some cases this may be too much of a good thing and may lead to
14.11 nontermination:
14.12 @@ -21,7 +21,7 @@
14.13 lemma "\\<forall>x. f x = g (f (g x)) \\<Longrightarrow> f [] = f [] @ []";
14.14
14.15 txt{*\noindent
14.16 -cannot be solved by an unmodified application of \isa{simp} because the
14.17 +cannot be solved by an unmodified application of @{text"simp"} because the
14.18 simplification rule @{term"f x = g (f (g x))"} extracted from the assumption
14.19 does not terminate. Isabelle notices certain simple forms of
14.20 nontermination but not this one. The problem can be circumvented by
14.21 @@ -33,17 +33,17 @@
14.22 text{*\noindent
14.23 There are three options that influence the treatment of assumptions:
14.24 \begin{description}
14.25 -\item[\isa{(no_asm)}]\indexbold{*no_asm}
14.26 +\item[@{text"(no_asm)"}]\indexbold{*no_asm}
14.27 means that assumptions are completely ignored.
14.28 -\item[\isa{(no_asm_simp)}]\indexbold{*no_asm_simp}
14.29 +\item[@{text"(no_asm_simp)"}]\indexbold{*no_asm_simp}
14.30 means that the assumptions are not simplified but
14.31 are used in the simplification of the conclusion.
14.32 -\item[\isa{(no_asm_use)}]\indexbold{*no_asm_use}
14.33 +\item[@{text"(no_asm_use)"}]\indexbold{*no_asm_use}
14.34 means that the assumptions are simplified but are not
14.35 used in the simplification of each other or the conclusion.
14.36 \end{description}
14.37 -Neither \isa{(no_asm_simp)} nor \isa{(no_asm_use)} allow to simplify the above
14.38 -problematic subgoal.
14.39 +Neither @{text"(no_asm_simp)"} nor @{text"(no_asm_use)"} allows us to simplify
14.40 +the above problematic subgoal.
14.41
14.42 Note that only one of the above options is allowed, and it must precede all
14.43 other arguments.
15.1 --- a/doc-src/TutorialI/Misc/case_exprs.thy Fri Sep 01 18:29:52 2000 +0200
15.2 +++ b/doc-src/TutorialI/Misc/case_exprs.thy Fri Sep 01 19:09:44 2000 +0200
15.3 @@ -17,10 +17,10 @@
15.4
15.5 In general, if $e$ is a term of the datatype $t$ defined in
15.6 \S\ref{sec:general-datatype} above, the corresponding
15.7 -\isa{case}-expression analyzing $e$ is
15.8 +@{text"case"}-expression analyzing $e$ is
15.9 \[
15.10 \begin{array}{rrcl}
15.11 -\isa{case}~e~\isa{of} & C@1~x@ {11}~\dots~x@ {1k@1} & \To & e@1 \\
15.12 +@{text"case"}~e~@{text"of"} & C@1~x@ {11}~\dots~x@ {1k@1} & \To & e@1 \\
15.13 \vdots \\
15.14 \mid & C@m~x@ {m1}~\dots~x@ {mk@m} & \To & e@m
15.15 \end{array}
15.16 @@ -32,18 +32,19 @@
15.17 error messages.
15.18 \end{warn}
15.19 \noindent
15.20 -Nested patterns can be simulated by nested \isa{case}-expressions: instead
15.21 +Nested patterns can be simulated by nested @{text"case"}-expressions: instead
15.22 of
15.23 -% case xs of [] => 1 | [x] => x | x#(y#zs) => y
15.24 -\begin{isabelle}
15.25 -~~~case~xs~of~[]~{\isasymRightarrow}~1~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
15.26 -\end{isabelle}
15.27 +%
15.28 +\begin{quote}
15.29 +@{text"case xs of [] => 1 | [x] => x | x#(y#zs) => y"}
15.30 +%~~~case~xs~of~[]~{\isasymRightarrow}~1~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
15.31 +\end{quote}
15.32 write
15.33 \begin{quote}
15.34 @{term[display,eta_contract=false,margin=50]"case xs of [] => 1 | x#ys => (case ys of [] => x | y#zs => y)"}
15.35 \end{quote}
15.36
15.37 -Note that \isa{case}-expressions may need to be enclosed in parentheses to
15.38 +Note that @{text"case"}-expressions may need to be enclosed in parentheses to
15.39 indicate their scope
15.40 *}
15.41
16.1 --- a/doc-src/TutorialI/Misc/case_splits.thy Fri Sep 01 18:29:52 2000 +0200
16.2 +++ b/doc-src/TutorialI/Misc/case_splits.thy Fri Sep 01 19:09:44 2000 +0200
16.3 @@ -3,8 +3,8 @@
16.4 (*>*)
16.5
16.6 text{*
16.7 -Goals containing \isaindex{if}-expressions are usually proved by case
16.8 -distinction on the condition of the \isa{if}. For example the goal
16.9 +Goals containing @{text"if"}-expressions are usually proved by case
16.10 +distinction on the condition of the @{text"if"}. For example the goal
16.11 *}
16.12
16.13 lemma "\\<forall>xs. if xs = [] then rev xs = [] else rev xs \\<noteq> []";
16.14 @@ -12,7 +12,7 @@
16.15 txt{*\noindent
16.16 can be split into
16.17 \begin{isabelle}
16.18 -~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])%
16.19 +~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])
16.20 \end{isabelle}
16.21 by a degenerate form of simplification
16.22 *}
16.23 @@ -21,14 +21,14 @@
16.24 (*<*)oops;(*>*)
16.25
16.26 text{*\noindent
16.27 -where no simplification rules are included (\isa{only:} is followed by the
16.28 +where no simplification rules are included (@{text"only:"} is followed by the
16.29 empty list of theorems) but the rule \isaindexbold{split_if} for
16.30 -splitting \isa{if}s is added (via the modifier \isa{split:}). Because
16.31 -case-splitting on \isa{if}s is almost always the right proof strategy, the
16.32 -simplifier performs it automatically. Try \isacommand{apply}\isa{(simp)}
16.33 +splitting @{text"if"}s is added (via the modifier @{text"split:"}). Because
16.34 +case-splitting on @{text"if"}s is almost always the right proof strategy, the
16.35 +simplifier performs it automatically. Try \isacommand{apply}@{text"(simp)"}
16.36 on the initial goal above.
16.37
16.38 -This splitting idea generalizes from \isa{if} to \isaindex{case}:
16.39 +This splitting idea generalizes from @{text"if"} to \isaindex{case}:
16.40 *}
16.41
16.42 lemma "(case xs of [] \\<Rightarrow> zs | y#ys \\<Rightarrow> y#(ys@zs)) = xs@zs";
16.43 @@ -36,7 +36,7 @@
16.44 becomes
16.45 \begin{isabelle}
16.46 ~1.~(xs~=~[]~{\isasymlongrightarrow}~zs~=~xs~@~zs)~{\isasymand}\isanewline
16.47 -~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)%
16.48 +~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)
16.49 \end{isabelle}
16.50 by typing
16.51 *}
16.52 @@ -45,9 +45,9 @@
16.53 (*<*)oops;(*>*)
16.54
16.55 text{*\noindent
16.56 -In contrast to \isa{if}-expressions, the simplifier does not split
16.57 -\isa{case}-expressions by default because this can lead to nontermination
16.58 -in case of recursive datatypes. Again, if the \isa{only:} modifier is
16.59 +In contrast to @{text"if"}-expressions, the simplifier does not split
16.60 +@{text"case"}-expressions by default because this can lead to nontermination
16.61 +in case of recursive datatypes. Again, if the @{text"only:"} modifier is
16.62 dropped, the above goal is solved,
16.63 *}
16.64 (*<*)
16.65 @@ -56,17 +56,17 @@
16.66 by(simp split: list.split);
16.67
16.68 text{*\noindent%
16.69 -which \isacommand{apply}\isa{(simp)} alone will not do.
16.70 +which \isacommand{apply}@{text"(simp)"} alone will not do.
16.71
16.72 In general, every datatype $t$ comes with a theorem
16.73 -\isa{$t$.split} which can be declared to be a \bfindex{split rule} either
16.74 -locally as above, or by giving it the \isa{split} attribute globally:
16.75 +$t$@{text".split"} which can be declared to be a \bfindex{split rule} either
16.76 +locally as above, or by giving it the @{text"split"} attribute globally:
16.77 *}
16.78
16.79 lemmas [split] = list.split;
16.80
16.81 text{*\noindent
16.82 -The \isa{split} attribute can be removed with the \isa{del} modifier,
16.83 +The @{text"split"} attribute can be removed with the @{text"del"} modifier,
16.84 either locally
16.85 *}
16.86 (*<*)
16.87 @@ -83,8 +83,9 @@
16.88
16.89 text{*
16.90 The above split rules intentionally only affect the conclusion of a
16.91 -subgoal. If you want to split an \isa{if} or \isa{case}-expression in
16.92 -the assumptions, you have to apply \isa{split\_if\_asm} or $t$\isa{.split_asm}:
16.93 +subgoal. If you want to split an @{text"if"} or @{text"case"}-expression in
16.94 +the assumptions, you have to apply @{thm[source]split_if_asm} or
16.95 +$t$@{text".split_asm"}:
16.96 *}
16.97
16.98 lemma "if xs = [] then ys ~= [] else ys = [] ==> xs @ ys ~= []"
16.99 @@ -92,13 +93,14 @@
16.100
16.101 txt{*\noindent
16.102 In contrast to splitting the conclusion, this actually creates two
16.103 -separate subgoals (which are solved by \isa{simp\_all}):
16.104 +separate subgoals (which are solved by @{text"simp_all"}):
16.105 \begin{isabelle}
16.106 \ \isadigit{1}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharat}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\isanewline
16.107 \ \isadigit{2}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{xs}\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}
16.108 \end{isabelle}
16.109 If you need to split both in the assumptions and the conclusion,
16.110 -use $t$\isa{.splits} which subsumes $t$\isa{.split} and $t$\isa{.split_asm}.
16.111 +use $t$@{text".splits"} which subsumes $t$@{text".split"} and
16.112 +$t$@{text".split_asm"}. Analogously, there is @{thm[source]if_splits}.
16.113 *}
16.114
16.115 (*<*)
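The case-split lemma proved in the hunk above, `(case xs of [] => zs | y#ys => y#(ys@zs)) = xs@zs`, can also be spot-checked executably. A minimal sketch in Python standing in for Isabelle/HOL; the name `lhs` is ours, not from the tutorial:

```python
# Left-hand side of the lemma: a case analysis on the list xs,
# transliterated from the Isabelle case-expression.
def lhs(xs, zs):
    if not xs:               # [] branch
        return zs
    y, ys = xs[0], xs[1:]    # y # ys branch
    return [y] + ys + zs

# The lemma claims lhs(xs, zs) == xs + zs; check both case-split branches.
assert lhs([], [9]) == [] + [9]
assert lhs([1, 2], [9]) == [1, 2] + [9]
```

This mirrors what `split: list.split` does in the proof: one subgoal per branch of the case expression.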
17.1 --- a/doc-src/TutorialI/Misc/cond_rewr.thy Fri Sep 01 18:29:52 2000 +0200
17.2 +++ b/doc-src/TutorialI/Misc/cond_rewr.thy Fri Sep 01 19:09:44 2000 +0200
17.3 @@ -23,10 +23,10 @@
17.4 (*>*)
17.5 text{*\noindent
17.6 is proved by plain simplification:
17.7 -the conditional equation \isa{hd_Cons_tl} above
17.8 -can simplify \isa{hd(rev~xs)~\#~tl(rev~xs)} to \isa{rev xs}
17.9 -because the corresponding precondition \isa{rev xs \isasymnoteq\ []}
17.10 -simplifies to \isa{xs \isasymnoteq\ []}, which is exactly the local
17.11 +the conditional equation @{thm[source]hd_Cons_tl} above
17.12 +can simplify @{term"hd(rev xs) # tl(rev xs)"} to @{term"rev xs"}
17.13 +because the corresponding precondition @{term"rev xs ~= []"}
17.14 +simplifies to @{term"xs ~= []"}, which is exactly the local
17.15 assumption of the subgoal.
17.16 *}
17.17 (*<*)
18.1 --- a/doc-src/TutorialI/Misc/document/AdvancedInd.tex Fri Sep 01 18:29:52 2000 +0200
18.2 +++ b/doc-src/TutorialI/Misc/document/AdvancedInd.tex Fri Sep 01 19:09:44 2000 +0200
18.3 @@ -35,7 +35,7 @@
18.4 \ 1.\ xs\ {\isasymnoteq}\ []\ {\isasymLongrightarrow}\ hd\ []\ =\ last\ []
18.5 \end{isabelle}
18.6 We cannot prove this equality because we do not know what \isa{hd} and
18.7 -\isa{last} return when applied to \isa{[]}.
18.8 +\isa{last} return when applied to \isa{{\isacharbrackleft}{\isacharbrackright}}.
18.9
18.10 The point is that we have violated the above warning. Because the induction
18.11 formula is only the conclusion, the occurrence of \isa{xs} in the premises is
18.12 @@ -43,7 +43,7 @@
18.13 becomes unprovable. Fortunately, the solution is easy:
18.14 \begin{quote}
18.15 \emph{Pull all occurrences of the induction variable into the conclusion
18.16 -using \isa{\isasymlongrightarrow}.}
18.17 +using \isa{{\isasymlongrightarrow}}.}
18.18 \end{quote}
18.19 This means we should prove%
18.20 \end{isamarkuptxt}%
18.21 @@ -56,10 +56,11 @@
18.22 \end{isabelle}
18.23 which is trivial, and \isa{auto} finishes the whole proof.
18.24
18.25 -If \isa{hd\_rev} is meant to be a simplification rule, you are done. But if you
18.26 -really need the \isa{\isasymLongrightarrow}-version of \isa{hd\_rev}, for
18.27 -example because you want to apply it as an introduction rule, you need to
18.28 -derive it separately, by combining it with modus ponens:%
18.29 +If \isa{hd{\isacharunderscore}rev} is meant to be a simplification rule, you are
18.30 +done. But if you really need the \isa{{\isasymLongrightarrow}}-version of
18.31 +\isa{hd{\isacharunderscore}rev}, for example because you want to apply it as an
18.32 +introduction rule, you need to derive it separately, by combining it with
18.33 +modus ponens:%
18.34 \end{isamarkuptext}%
18.35 \isacommand{lemmas}\ hd{\isacharunderscore}revI\ {\isacharequal}\ hd{\isacharunderscore}rev{\isacharbrackleft}THEN\ mp{\isacharbrackright}%
18.36 \begin{isamarkuptext}%
18.37 @@ -83,13 +84,13 @@
18.38 \isacommand{lemmas}\ myrule\ {\isacharequal}\ simple{\isacharbrackleft}THEN\ spec{\isacharcomma}\ THEN\ mp{\isacharcomma}\ THEN\ mp{\isacharbrackright}%
18.39 \begin{isamarkuptext}%
18.40 \noindent
18.41 -or the wholesale stripping of \isa{\isasymforall} and
18.42 -\isa{\isasymlongrightarrow} in the conclusion via \isa{rulify}%
18.43 +or the wholesale stripping of \isa{{\isasymforall}} and
18.44 +\isa{{\isasymlongrightarrow}} in the conclusion via \isa{rulify}%
18.45 \end{isamarkuptext}%
18.46 \isacommand{lemmas}\ myrule\ {\isacharequal}\ simple{\isacharbrackleft}rulify{\isacharbrackright}%
18.47 \begin{isamarkuptext}%
18.48 \noindent
18.49 -yielding \isa{{\isasymlbrakk}\mbox{A}\ \mbox{y}{\isacharsemicolon}\ \mbox{B}\ \mbox{y}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{B}\ \mbox{y}\ {\isasymand}\ \mbox{A}\ \mbox{y}}.
18.50 +yielding \isa{{\isasymlbrakk}A\ y{\isacharsemicolon}\ B\ y{\isasymrbrakk}\ {\isasymLongrightarrow}\ B\ y\ {\isasymand}\ A\ y}.
18.51 You can go one step further and include these derivations already in the
18.52 statement of your original lemma, thus avoiding the intermediate step:%
18.53 \end{isamarkuptext}%
18.54 @@ -118,11 +119,11 @@
18.55 Structural induction on \isa{nat} is
18.56 usually known as ``mathematical induction''. There is also ``complete
18.57 induction'', where you must prove $P(n)$ under the assumption that $P(m)$
18.58 -holds for all $m<n$. In Isabelle, this is the theorem \isa{less\_induct}:
18.59 +holds for all $m<n$. In Isabelle, this is the theorem \isa{less{\isacharunderscore}induct}:
18.60 \begin{quote}
18.61
18.62 \begin{isabelle}%
18.63 -{\isacharparenleft}{\isasymAnd}\mbox{n}{\isachardot}\ {\isasymforall}\mbox{m}{\isachardot}\ \mbox{m}\ {\isacharless}\ \mbox{n}\ {\isasymlongrightarrow}\ \mbox{P}\ \mbox{m}\ {\isasymLongrightarrow}\ \mbox{P}\ \mbox{n}{\isacharparenright}\ {\isasymLongrightarrow}\ \mbox{P}\ \mbox{n}
18.64 +{\isacharparenleft}{\isasymAnd}n{\isachardot}\ {\isasymforall}m{\isachardot}\ m\ {\isacharless}\ n\ {\isasymlongrightarrow}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n
18.65 \end{isabelle}%
18.66
18.67 \end{quote}
18.68 @@ -136,14 +137,14 @@
18.69 discouraged, because of the danger of inconsistencies. The above axiom does
18.70 not introduce an inconsistency because, for example, the identity function
18.71 satisfies it.}
18.72 -for \isa{f} it follows that \isa{\mbox{n}\ {\isasymle}\ f\ \mbox{n}}, which can
18.73 -be proved by induction on \isa{f\ \mbox{n}}. Following the recipy outlined
18.74 +for \isa{f} it follows that \isa{n\ {\isasymle}\ f\ n}, which can
18.74 +be proved by induction on \isa{f\ n}. Following the recipe outlined
18.76 above, we have to phrase the proposition as follows to allow induction:%
18.77 \end{isamarkuptext}%
18.78 \isacommand{lemma}\ f{\isacharunderscore}incr{\isacharunderscore}lem{\isacharcolon}\ {\isachardoublequote}{\isasymforall}i{\isachardot}\ k\ {\isacharequal}\ f\ i\ {\isasymlongrightarrow}\ i\ {\isasymle}\ f\ i{\isachardoublequote}%
18.79 \begin{isamarkuptxt}%
18.80 \noindent
18.81 -To perform induction on \isa{k} using \isa{less\_induct}, we use the same
18.82 +To perform induction on \isa{k} using \isa{less{\isacharunderscore}induct}, we use the same
18.83 general induction method as for recursion induction (see
18.84 \S\ref{sec:recdef-induction}):%
18.85 \end{isamarkuptxt}%
18.86 @@ -155,9 +156,9 @@
18.87 \ 1.\ {\isasymAnd}\mbox{n}.\ {\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i})\isanewline
18.88 \ \ \ \ \ \ \ {\isasymLongrightarrow}\ {\isasymforall}\mbox{i}.\ \mbox{n}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i}
18.89 \end{isabelle}
18.90 -After stripping the \isa{\isasymforall i}, the proof continues with a case
18.91 -distinction on \isa{i}. The case \isa{i = 0} is trivial and we focus on the
18.92 -other case:
18.93 +After stripping the \isa{{\isasymforall}i}, the proof continues with a case
18.94 +distinction on \isa{i}. The case \isa{i\ {\isacharequal}\ \isadigit{0}} is trivial and we focus on
18.95 +the other case:
18.96 \begin{isabelle}
18.97 \ 1.\ {\isasymAnd}\mbox{n}\ \mbox{i}\ \mbox{nat}.\isanewline
18.98 \ \ \ \ \ \ \ {\isasymlbrakk}{\isasymforall}\mbox{m}.\ \mbox{m}\ <\ \mbox{n}\ {\isasymlongrightarrow}\ ({\isasymforall}\mbox{i}.\ \mbox{m}\ =\ f\ \mbox{i}\ {\isasymlongrightarrow}\ \mbox{i}\ {\isasymle}\ f\ \mbox{i});\ \mbox{i}\ =\ Suc\ \mbox{nat}{\isasymrbrakk}\isanewline
18.99 @@ -169,15 +170,15 @@
18.100 \noindent
18.101 It is not surprising if you find the last step puzzling.
18.102 The proof goes like this (writing \isa{j} instead of \isa{nat}).
18.103 -Since \isa{\mbox{i}\ {\isacharequal}\ Suc\ \mbox{j}} it suffices to show
18.104 -\isa{\mbox{j}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ \mbox{j}{\isacharparenright}} (by \isa{Suc\_leI}: \isa{\mbox{m}\ {\isacharless}\ \mbox{n}\ {\isasymLongrightarrow}\ Suc\ \mbox{m}\ {\isasymle}\ \mbox{n}}). This is
18.105 -proved as follows. From \isa{f\_ax} we have \isa{f\ {\isacharparenleft}f\ \mbox{j}{\isacharparenright}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ \mbox{j}{\isacharparenright}}
18.106 -(1) which implies \isa{f\ \mbox{j}\ {\isasymle}\ f\ {\isacharparenleft}f\ \mbox{j}{\isacharparenright}} (by the induction hypothesis).
18.107 -Using (1) once more we obtain \isa{f\ \mbox{j}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ \mbox{j}{\isacharparenright}} (2) by transitivity
18.108 -(\isa{le_less_trans}: \isa{{\isasymlbrakk}\mbox{i}\ {\isasymle}\ \mbox{j}{\isacharsemicolon}\ \mbox{j}\ {\isacharless}\ \mbox{k}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{i}\ {\isacharless}\ \mbox{k}}).
18.109 -Using the induction hypothesis once more we obtain \isa{\mbox{j}\ {\isasymle}\ f\ \mbox{j}}
18.110 -which, together with (2) yields \isa{\mbox{j}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ \mbox{j}{\isacharparenright}} (again by
18.111 -\isa{le_less_trans}).
18.112 +Since \isa{i\ {\isacharequal}\ Suc\ j} it suffices to show
18.113 +\isa{j\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}} (by \isa{Suc{\isacharunderscore}leI}: \isa{m\ {\isacharless}\ n\ {\isasymLongrightarrow}\ Suc\ m\ {\isasymle}\ n}). This is
18.114 +proved as follows. From \isa{f{\isacharunderscore}ax} we have \isa{f\ {\isacharparenleft}f\ j{\isacharparenright}\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}}
18.115 +(1) which implies \isa{f\ j\ {\isasymle}\ f\ {\isacharparenleft}f\ j{\isacharparenright}} (by the induction hypothesis).
18.116 +Using (1) once more we obtain \isa{f\ j\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}} (2) by transitivity
18.117 +(\isa{le{\isacharunderscore}less{\isacharunderscore}trans}: \isa{{\isasymlbrakk}i\ {\isasymle}\ j{\isacharsemicolon}\ j\ {\isacharless}\ k{\isasymrbrakk}\ {\isasymLongrightarrow}\ i\ {\isacharless}\ k}).
18.118 +Using the induction hypothesis once more we obtain \isa{j\ {\isasymle}\ f\ j}
18.119 +which, together with (2) yields \isa{j\ {\isacharless}\ f\ {\isacharparenleft}Suc\ j{\isacharparenright}} (again by
18.120 +\isa{le{\isacharunderscore}less{\isacharunderscore}trans}).
18.121
18.122 This last step shows both the power and the danger of automatic proofs: they
18.123 will usually not tell you how the proof goes, because it can be very hard to
18.124 @@ -185,33 +186,34 @@
18.125 \S\ref{sec:part2?} introduces a language for writing readable yet concise
18.126 proofs.
18.127
18.128 -We can now derive the desired \isa{\mbox{i}\ {\isasymle}\ f\ \mbox{i}} from \isa{f\_incr}:%
18.129 +We can now derive the desired \isa{i\ {\isasymle}\ f\ i} from \isa{f{\isacharunderscore}incr}:%
18.130 \end{isamarkuptext}%
18.131 \isacommand{lemmas}\ f{\isacharunderscore}incr\ {\isacharequal}\ f{\isacharunderscore}incr{\isacharunderscore}lem{\isacharbrackleft}rulify{\isacharcomma}\ OF\ refl{\isacharbrackright}%
18.132 \begin{isamarkuptext}%
18.133 \noindent
18.134 -The final \isa{refl} gets rid of the premise \isa{?k = f ?i}. Again, we could
18.135 -have included this derivation in the original statement of the lemma:%
18.136 +The final \isa{refl} gets rid of the premise \isa{{\isacharquery}k\ {\isacharequal}\ f\ {\isacharquery}i}. Again,
18.137 +we could have included this derivation in the original statement of the lemma:%
18.138 \end{isamarkuptext}%
18.139 \isacommand{lemma}\ f{\isacharunderscore}incr{\isacharbrackleft}rulify{\isacharcomma}\ OF\ refl{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isasymforall}i{\isachardot}\ k\ {\isacharequal}\ f\ i\ {\isasymlongrightarrow}\ i\ {\isasymle}\ f\ i{\isachardoublequote}%
18.140 \begin{isamarkuptext}%
18.141 \begin{exercise}
18.142 -From the above axiom and lemma for \isa{f} show that \isa{f} is the identity.
18.143 +From the above axiom and lemma for \isa{f} show that \isa{f} is the
18.144 +identity.
18.145 \end{exercise}
18.146
18.147 -In general, \isa{induct\_tac} can be applied with any rule \isa{r}
18.148 -whose conclusion is of the form \isa{?P ?x1 \dots ?xn}, in which case the
18.149 +In general, \isa{induct{\isacharunderscore}tac} can be applied with any rule $r$
18.150 +whose conclusion is of the form ${?}P~?x@1 \dots ?x@n$, in which case the
18.151 format is
18.152 -\begin{ttbox}
18.153 -apply(induct_tac y1 ... yn rule: r)
18.154 -\end{ttbox}\index{*induct_tac}%
18.155 -where \isa{y1}, \dots, \isa{yn} are variables in the first subgoal.
18.156 -In fact, \isa{induct\_tac} even allows the conclusion of
18.157 -\isa{r} to be an (iterated) conjunction of formulae of the above form, in
18.158 +\begin{quote}
18.159 +\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac} $y@1 \dots y@n$ \isa{rule{\isacharcolon}} $r$\isa{{\isacharparenright}}
18.160 +\end{quote}\index{*induct_tac}%
18.161 +where $y@1, \dots, y@n$ are variables in the first subgoal.
18.162 +In fact, \isa{induct{\isacharunderscore}tac} even allows the conclusion of
18.163 +$r$ to be an (iterated) conjunction of formulae of the above form, in
18.164 which case the application is
18.165 -\begin{ttbox}
18.166 -apply(induct_tac y1 ... yn and ... and z1 ... zm rule: r)
18.167 -\end{ttbox}%
18.168 +\begin{quote}
18.169 +\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac} $y@1 \dots y@n$ \isa{and} \dots\ \isa{and} $z@1 \dots z@m$ \isa{rule{\isacharcolon}} $r$\isa{{\isacharparenright}}
18.170 +\end{quote}%
18.171 \end{isamarkuptext}%
18.172 %
18.173 \isamarkupsubsection{Derivation of new induction schemas}
18.174 @@ -220,16 +222,16 @@
18.175 \label{sec:derive-ind}
18.176 Induction schemas are ordinary theorems and you can derive new ones
18.177 whenever you wish. This section shows you how to, using the example
18.178 -of \isa{less\_induct}. Assume we only have structural induction
18.179 +of \isa{less{\isacharunderscore}induct}. Assume we only have structural induction
18.180 available for \isa{nat} and want to derive complete induction. This
18.181 requires us to generalize the statement first:%
18.182 \end{isamarkuptext}%
18.183 -\isacommand{lemma}\ induct{\isacharunderscore}lem{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isacharequal}{\isacharequal}{\isachargreater}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m{\isachardoublequote}\isanewline
18.184 +\isacommand{lemma}\ induct{\isacharunderscore}lem{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m{\isachardoublequote}\isanewline
18.185 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ n{\isacharparenright}%
18.186 \begin{isamarkuptxt}%
18.187 \noindent
18.188 -The base case is trivially true. For the induction step (\isa{\mbox{m}\ {\isacharless}\ Suc\ \mbox{n}}) we distinguish two cases: \isa{\mbox{m}\ {\isacharless}\ \mbox{n}} is true by induction
18.189 -hypothesis and \isa{\mbox{m}\ {\isacharequal}\ \mbox{n}} follow from the assumption again using
18.190 +The base case is trivially true. For the induction step (\isa{m\ {\isacharless}\ Suc\ n}) we distinguish two cases: \isa{m\ {\isacharless}\ n} is true by the induction
18.191 +hypothesis and \isa{m\ {\isacharequal}\ n} follows from the assumption, again using
18.192 the induction hypothesis:%
18.193 \end{isamarkuptxt}%
18.194 \isacommand{apply}{\isacharparenleft}blast{\isacharparenright}\isanewline
18.195 @@ -239,38 +241,38 @@
18.196 \isacommand{ML}{\isachardoublequote}reset\ quick{\isacharunderscore}and{\isacharunderscore}dirty{\isachardoublequote}%
18.197 \begin{isamarkuptext}%
18.198 \noindent
18.199 -The elimination rule \isa{less_SucE} expresses the case distinction:
18.200 +The elimination rule \isa{less{\isacharunderscore}SucE} expresses the case distinction:
18.201 \begin{quote}
18.202
18.203 \begin{isabelle}%
18.204 -{\isasymlbrakk}\mbox{m}\ {\isacharless}\ Suc\ \mbox{n}{\isacharsemicolon}\ \mbox{m}\ {\isacharless}\ \mbox{n}\ {\isasymLongrightarrow}\ \mbox{P}{\isacharsemicolon}\ \mbox{m}\ {\isacharequal}\ \mbox{n}\ {\isasymLongrightarrow}\ \mbox{P}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{P}
18.205 +{\isasymlbrakk}m\ {\isacharless}\ Suc\ n{\isacharsemicolon}\ m\ {\isacharless}\ n\ {\isasymLongrightarrow}\ P{\isacharsemicolon}\ m\ {\isacharequal}\ n\ {\isasymLongrightarrow}\ P{\isasymrbrakk}\ {\isasymLongrightarrow}\ P
18.206 \end{isabelle}%
18.207
18.208 \end{quote}
18.209
18.210 Now it is straightforward to derive the original version of
18.211 -\isa{less\_induct} by manipulting the conclusion of the above lemma:
18.212 -instantiate \isa{n} by \isa{Suc\ \mbox{n}} and \isa{m} by \isa{n} and
18.213 -remove the trivial condition \isa{\mbox{n}\ {\isacharless}\ \mbox{Sc}\ \mbox{n}}. Fortunately, this
18.214 +\isa{less{\isacharunderscore}induct} by manipulating the conclusion of the above lemma:
18.215 +instantiate \isa{n} by \isa{Suc\ n} and \isa{m} by \isa{n} and
18.216 +remove the trivial condition \isa{n\ {\isacharless}\ Suc\ n}. Fortunately, this
18.217 happens automatically when we add the lemma as a new premise to the
18.218 desired goal:%
18.219 \end{isamarkuptext}%
18.220 -\isacommand{theorem}\ less{\isacharunderscore}induct{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isacharequal}{\isacharequal}{\isachargreater}\ P\ n{\isachardoublequote}\isanewline
18.221 +\isacommand{theorem}\ less{\isacharunderscore}induct{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isasymAnd}n{\isacharcolon}{\isacharcolon}nat{\isachardot}\ {\isasymforall}m{\isacharless}n{\isachardot}\ P\ m\ {\isasymLongrightarrow}\ P\ n{\isacharparenright}\ {\isasymLongrightarrow}\ P\ n{\isachardoublequote}\isanewline
18.222 \isacommand{by}{\isacharparenleft}insert\ induct{\isacharunderscore}lem{\isacharcomma}\ blast{\isacharparenright}%
18.223 \begin{isamarkuptext}%
18.224 \noindent
18.225 Finally we should mention that HOL already provides the mother of all
18.226 -inductions, \emph{wellfounded induction} (\isa{wf\_induct}):
18.227 +inductions, \emph{wellfounded induction} (\isa{wf{\isacharunderscore}induct}):
18.228 \begin{quote}
18.229
18.230 \begin{isabelle}%
18.231 -{\isasymlbrakk}wf\ \mbox{r}{\isacharsemicolon}\ {\isasymAnd}\mbox{x}{\isachardot}\ {\isasymforall}\mbox{y}{\isachardot}\ {\isacharparenleft}\mbox{y}{\isacharcomma}\ \mbox{x}{\isacharparenright}\ {\isasymin}\ \mbox{r}\ {\isasymlongrightarrow}\ \mbox{P}\ \mbox{y}\ {\isasymLongrightarrow}\ \mbox{P}\ \mbox{x}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{P}\ \mbox{a}
18.232 +{\isasymlbrakk}wf\ r{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ {\isasymforall}y{\isachardot}\ {\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}\ {\isasymin}\ r\ {\isasymlongrightarrow}\ P\ y\ {\isasymLongrightarrow}\ P\ x{\isasymrbrakk}\ {\isasymLongrightarrow}\ P\ a
18.233 \end{isabelle}%
18.234
18.235 \end{quote}
18.236 -where \isa{wf\ \mbox{r}} means that the relation \isa{r} is wellfounded.
18.237 -For example \isa{less\_induct} is the special case where \isa{r} is \isa{<} on \isa{nat}.
18.238 -For details see the library.%
18.239 +where \isa{wf\ r} means that the relation \isa{r} is wellfounded.
18.240 +For example \isa{less{\isacharunderscore}induct} is the special case where \isa{r} is
18.241 +\isa{{\isacharless}} on \isa{nat}. For details see the library.%
18.242 \end{isamarkuptext}%
18.243 \end{isabellebody}%
18.244 %%% Local Variables:
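Complete induction as derived above also has a computational reading: to establish the result for `n`, one may use the result for any `m < n`. The following Python sketch (our own illustration, not part of the tutorial sources) encodes `less_induct` as a recursion scheme and instantiates it:

```python
# Complete ("course-of-values") induction as a recursion scheme:
# the step function receives n and a callback valid for all m < n.
# Termination holds because m strictly decreases.
def less_induct(step, n):
    def recur(m):
        assert m < n          # the side condition m < n of the schema
        return less_induct(step, m)
    return step(n, recur)

# Example instance: Fibonacci needs the two preceding values,
# so one-step (structural) recursion on n is not enough.
def fib(n):
    return less_induct(
        lambda n, recur: n if n < 2 else recur(n - 1) + recur(n - 2), n)

assert fib(10) == 55
```

The `assert m < n` plays the role of the premise `m < n` in the induction schema; well-founded induction (`wf_induct`) generalizes this from `<` on `nat` to any well-founded relation.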
19.1 --- a/doc-src/TutorialI/Misc/document/Itrev.tex Fri Sep 01 18:29:52 2000 +0200
19.2 +++ b/doc-src/TutorialI/Misc/document/Itrev.tex Fri Sep 01 19:09:44 2000 +0200
19.3 @@ -3,17 +3,18 @@
19.4 %
19.5 \begin{isamarkuptext}%
19.6 Function \isa{rev} has quadratic worst-case running time
19.7 -because it calls function \isa{\at} for each element of the list and
19.8 -\isa{\at} is linear in its first argument. A linear time version of
19.9 +because it calls function \isa{{\isacharat}} for each element of the list and
19.10 +\isa{{\isacharat}} is linear in its first argument. A linear time version of
19.11 \isa{rev} requires an extra argument where the result is accumulated
19.12 -gradually, using only \isa{\#}:%
19.13 +gradually, using only \isa{{\isacharhash}}:%
19.14 \end{isamarkuptext}%
19.15 \isacommand{consts}\ itrev\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list{\isachardoublequote}\isanewline
19.16 \isacommand{primrec}\isanewline
19.17 {\isachardoublequote}itrev\ {\isacharbrackleft}{\isacharbrackright}\ \ \ \ \ ys\ {\isacharequal}\ ys{\isachardoublequote}\isanewline
19.18 {\isachardoublequote}itrev\ {\isacharparenleft}x{\isacharhash}xs{\isacharparenright}\ ys\ {\isacharequal}\ itrev\ xs\ {\isacharparenleft}x{\isacharhash}ys{\isacharparenright}{\isachardoublequote}%
19.19 \begin{isamarkuptext}%
19.20 -\noindent The behaviour of \isa{itrev} is simple: it reverses
19.21 +\noindent
19.22 +The behaviour of \isa{itrev} is simple: it reverses
19.23 its first argument by stacking its elements onto the second argument,
19.24 and returning that second argument when the first one becomes
19.25 empty. Note that \isa{itrev} is tail-recursive, i.e.\ it can be
19.26 @@ -36,18 +37,18 @@
19.27 \end{isabelle}
19.28 Just as predicted above, the overall goal, and hence the induction
19.29 hypothesis, is too weak to solve the induction step because of the fixed
19.30 -\isa{[]}. The corresponding heuristic:
19.31 +\isa{{\isacharbrackleft}{\isacharbrackright}}. The corresponding heuristic:
19.32 \begin{quote}
19.33 \emph{Generalize goals for induction by replacing constants by variables.}
19.34 \end{quote}
19.35 -Of course one cannot do this na\"{\i}vely: \isa{itrev xs ys = rev xs} is
19.36 +Of course one cannot do this na\"{\i}vely: \isa{itrev\ xs\ ys\ {\isacharequal}\ rev\ xs} is
19.37 just not true---the correct generalization is%
19.38 \end{isamarkuptxt}%
19.39 \isacommand{lemma}\ {\isachardoublequote}itrev\ xs\ ys\ {\isacharequal}\ rev\ xs\ {\isacharat}\ ys{\isachardoublequote}%
19.40 \begin{isamarkuptxt}%
19.41 \noindent
19.42 -If \isa{ys} is replaced by \isa{[]}, the right-hand side simplifies to
19.43 -\isa{rev\ \mbox{xs}}, just as required.
19.44 +If \isa{ys} is replaced by \isa{{\isacharbrackleft}{\isacharbrackright}}, the right-hand side simplifies to
19.45 +\isa{rev\ xs}, just as required.
19.46
19.47 In this particular instance it was easy to guess the right generalization,
19.48 but in more complex situations a good deal of creativity is needed. This is
19.49 @@ -59,12 +60,12 @@
19.50 \begin{isabelle}
19.51 ~1.~{\isasymAnd}a~list.\isanewline
19.52 ~~~~~~~itrev~list~ys~=~rev~list~@~ys~{\isasymLongrightarrow}\isanewline
19.53 -~~~~~~~itrev~list~(a~\#~ys)~=~rev~list~@~a~\#~ys%
19.54 +~~~~~~~itrev~list~(a~\#~ys)~=~rev~list~@~a~\#~ys
19.55 \end{isabelle}
19.56 The induction hypothesis is still too weak, but this time it takes no
19.57 intuition to generalize: the problem is that \isa{ys} is fixed throughout
19.58 the subgoal, but the induction hypothesis needs to be applied with
19.59 -\isa{\mbox{a}\ {\isacharhash}\ \mbox{ys}} instead of \isa{ys}. Hence we prove the theorem
19.60 +\isa{a\ {\isacharhash}\ ys} instead of \isa{ys}. Hence we prove the theorem
19.61 for all \isa{ys} instead of a fixed one:%
19.62 \end{isamarkuptxt}%
19.63 \isacommand{lemma}\ {\isachardoublequote}{\isasymforall}ys{\isachardot}\ itrev\ xs\ ys\ {\isacharequal}\ rev\ xs\ {\isacharat}\ ys{\isachardoublequote}%
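The accumulator idea behind `itrev`, and the generalized lemma `itrev xs ys = rev xs @ ys`, can be tried out directly. A small Python transliteration of the Isabelle definitions (assumed faithful, not taken from the sources):

```python
# Naive reversal: quadratic, because append is linear in its first argument.
def rev(xs):
    return [] if not xs else rev(xs[1:]) + [xs[0]]

# itrev: stacks the elements of xs onto the accumulator ys
# (written as a loop, matching the tail-recursive Isabelle definition).
def itrev(xs, ys):
    while xs:
        ys = [xs[0]] + ys
        xs = xs[1:]
    return ys

# The generalized lemma itrev xs ys = rev xs @ ys, spot-checked;
# the special case ys = [] gives the original goal itrev xs [] = rev xs.
xs, ys = [1, 2, 3, 4], [0]
assert itrev(xs, ys) == rev(xs) + ys
assert itrev(xs, []) == rev(xs)
```

Note how the check only goes through for arbitrary `ys`, exactly the generalization the induction proof needs.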
20.1 --- a/doc-src/TutorialI/Misc/document/Tree2.tex Fri Sep 01 18:29:52 2000 +0200
20.2 +++ b/doc-src/TutorialI/Misc/document/Tree2.tex Fri Sep 01 19:09:44 2000 +0200
20.3 @@ -4,13 +4,13 @@
20.4 \begin{isamarkuptext}%
20.5 \noindent In Exercise~\ref{ex:Tree} we defined a function
20.6 \isa{flatten} from trees to lists. The straightforward version of
20.7 -\isa{flatten} is based on \isa{\at} and is thus, like \isa{rev}, quadratic.
20.8 -A linear time version of \isa{flatten} again reqires an extra
20.9 +\isa{flatten} is based on \isa{{\isacharat}} and is thus, like \isa{rev},
20.10 +quadratic. A linear time version of \isa{flatten} again requires an extra
20.11 argument, the accumulator:%
20.12 \end{isamarkuptext}%
20.13 \isacommand{consts}\ flatten\isadigit{2}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ tree\ {\isacharequal}{\isachargreater}\ {\isacharprime}a\ list\ {\isacharequal}{\isachargreater}\ {\isacharprime}a\ list{\isachardoublequote}%
20.14 \begin{isamarkuptext}%
20.15 -\noindent Define \isa{flatten2} and prove%
20.16 +\noindent Define \isa{flatten\isadigit{2}} and prove%
20.17 \end{isamarkuptext}%
20.18 \isacommand{lemma}\ {\isachardoublequote}flatten\isadigit{2}\ t\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ flatten\ t{\isachardoublequote}\end{isabellebody}%
20.19 %%% Local Variables:
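The same accumulator trick applies to the tree exercise. A possible solution sketched in Python; the tuple encoding of binary trees (`None` for `Tip`, `(l, x, r)` for a node) is our assumption about the datatype of Exercise~\ref{ex:Tree}:

```python
# Straightforward flatten: uses append and is thus, like rev, quadratic.
def flatten(t):
    if t is None:
        return []
    l, x, r = t
    return flatten(l) + [x] + flatten(r)

# Linear-time version: the accumulator carries everything to the
# right of the current subtree, so no append is needed.
def flatten2(t, acc):
    if t is None:
        return acc
    l, x, r = t
    return flatten2(l, [x] + flatten2(r, acc))

# The lemma to prove: flatten2 t [] = flatten t.
t = ((None, 1, None), 2, (None, 3, None))
assert flatten2(t, []) == flatten(t)
assert flatten(t) == [1, 2, 3]
```

As with `itrev`, the Isabelle proof will need a generalization over the accumulator, e.g. `flatten2 t xs = flatten t @ xs`.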
21.1 --- a/doc-src/TutorialI/Misc/document/asm_simp.tex Fri Sep 01 18:29:52 2000 +0200
21.2 +++ b/doc-src/TutorialI/Misc/document/asm_simp.tex Fri Sep 01 19:09:44 2000 +0200
21.3 @@ -9,9 +9,9 @@
21.4 \isacommand{by}\ simp%
21.5 \begin{isamarkuptext}%
21.6 \noindent
21.7 -The second assumption simplifies to \isa{\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}}, which in turn
21.8 -simplifies the first assumption to \isa{\mbox{zs}\ {\isacharequal}\ \mbox{ys}}, thus reducing the
21.9 -conclusion to \isa{\mbox{ys}\ {\isacharequal}\ \mbox{ys}} and hence to \isa{True}.
21.10 +The second assumption simplifies to \isa{xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}}, which in turn
21.11 +simplifies the first assumption to \isa{zs\ {\isacharequal}\ ys}, thus reducing the
21.12 +conclusion to \isa{ys\ {\isacharequal}\ ys} and hence to \isa{True}.
21.13
21.14 In some cases this may be too much of a good thing and may lead to
21.15 nontermination:%
21.16 @@ -20,7 +20,7 @@
21.17 \begin{isamarkuptxt}%
21.18 \noindent
21.19 cannot be solved by an unmodified application of \isa{simp} because the
21.20 -simplification rule \isa{\mbox{f}\ \mbox{x}\ {\isacharequal}\ \mbox{g}\ {\isacharparenleft}\mbox{f}\ {\isacharparenleft}\mbox{g}\ \mbox{x}{\isacharparenright}{\isacharparenright}} extracted from the assumption
21.21 +simplification rule \isa{f\ x\ {\isacharequal}\ g\ {\isacharparenleft}f\ {\isacharparenleft}g\ x{\isacharparenright}{\isacharparenright}} extracted from the assumption
21.22 does not terminate. Isabelle notices certain simple forms of
21.23 nontermination but not this one. The problem can be circumvented by
21.24 explicitly telling the simplifier to ignore the assumptions:%
21.25 @@ -30,17 +30,17 @@
21.26 \noindent
21.27 There are three options that influence the treatment of assumptions:
21.28 \begin{description}
21.29 -\item[\isa{(no_asm)}]\indexbold{*no_asm}
21.30 +\item[\isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharparenright}}]\indexbold{*no_asm}
21.31 means that assumptions are completely ignored.
21.32 -\item[\isa{(no_asm_simp)}]\indexbold{*no_asm_simp}
21.33 +\item[\isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}simp{\isacharparenright}}]\indexbold{*no_asm_simp}
21.34 means that the assumptions are not simplified but
21.35 are used in the simplification of the conclusion.
21.36 -\item[\isa{(no_asm_use)}]\indexbold{*no_asm_use}
21.37 +\item[\isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}use{\isacharparenright}}]\indexbold{*no_asm_use}
21.38 means that the assumptions are simplified but are not
21.39 used in the simplification of each other or the conclusion.
21.40 \end{description}
21.41 -Neither \isa{(no_asm_simp)} nor \isa{(no_asm_use)} allow to simplify the above
21.42 -problematic subgoal.
21.43 +Neither \isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}simp{\isacharparenright}} nor \isa{{\isacharparenleft}no{\isacharunderscore}asm{\isacharunderscore}use{\isacharparenright}} allows us to
21.44 +simplify the above problematic subgoal.
21.45
21.46 Note that only one of the above options is allowed, and it must precede all
21.47 other arguments.%
22.1 --- a/doc-src/TutorialI/Misc/document/case_exprs.tex Fri Sep 01 18:29:52 2000 +0200
22.2 +++ b/doc-src/TutorialI/Misc/document/case_exprs.tex Fri Sep 01 19:09:44 2000 +0200
22.3 @@ -10,14 +10,14 @@
22.4 \begin{quote}
22.5
22.6 \begin{isabelle}%
22.7 -case\ \mbox{xs}\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{1}\ {\isacharbar}\ \mbox{y}\ {\isacharhash}\ \mbox{ys}\ {\isasymRightarrow}\ \mbox{y}
22.8 +case\ xs\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{1}\ {\isacharbar}\ y\ {\isacharhash}\ ys\ {\isasymRightarrow}\ y
22.9 \end{isabelle}%
22.10
22.11 \end{quote}
22.12 -evaluates to \isa{\isadigit{1}} if \isa{\mbox{xs}} is \isa{{\isacharbrackleft}{\isacharbrackright}} and to \isa{\mbox{y}} if
22.13 -\isa{\mbox{xs}} is \isa{\mbox{y}\ {\isacharhash}\ \mbox{ys}}. (Since the result in both branches must be of
22.14 -the same type, it follows that \isa{\mbox{y}} is of type \isa{nat} and hence
22.15 -that \isa{\mbox{xs}} is of type \isa{nat\ list}.)
22.16 +evaluates to \isa{\isadigit{1}} if \isa{xs} is \isa{{\isacharbrackleft}{\isacharbrackright}} and to \isa{y} if
22.17 +\isa{xs} is \isa{y\ {\isacharhash}\ ys}. (Since the result in both branches must be of
22.18 +the same type, it follows that \isa{y} is of type \isa{nat} and hence
22.19 +that \isa{xs} is of type \isa{nat\ list}.)
22.20
22.21 In general, if $e$ is a term of the datatype $t$ defined in
22.22 \S\ref{sec:general-datatype} above, the corresponding
22.23 @@ -38,16 +38,17 @@
22.24 \noindent
22.25 Nested patterns can be simulated by nested \isa{case}-expressions: instead
22.26 of
22.27 -% case xs of [] => 1 | [x] => x | x#(y#zs) => y
22.28 -\begin{isabelle}
22.29 -~~~case~xs~of~[]~{\isasymRightarrow}~1~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
22.30 -\end{isabelle}
22.31 +%
22.32 +\begin{quote}
22.33 +\isa{case\ xs\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}{\isachargreater}\ \isadigit{1}\ {\isacharbar}\ {\isacharbrackleft}x{\isacharbrackright}\ {\isacharequal}{\isachargreater}\ x\ {\isacharbar}\ x{\isacharhash}{\isacharparenleft}y{\isacharhash}zs{\isacharparenright}\ {\isacharequal}{\isachargreater}\ y}
22.34 +%~~~case~xs~of~[]~{\isasymRightarrow}~1~|~[x]~{\isasymRightarrow}~x~|~x~\#~y~\#~zs~{\isasymRightarrow}~y
22.35 +\end{quote}
22.36 write
22.37 \begin{quote}
22.38
22.39 \begin{isabelle}%
22.40 -case\ \mbox{xs}\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{1}\isanewline
22.41 -{\isacharbar}\ \mbox{x}\ {\isacharhash}\ \mbox{ys}\ {\isasymRightarrow}\ case\ \mbox{ys}\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \mbox{x}\ {\isacharbar}\ \mbox{y}\ {\isacharhash}\ \mbox{zs}\ {\isasymRightarrow}\ \mbox{y}
22.42 +case\ xs\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{1}\isanewline
22.43 +{\isacharbar}\ x\ {\isacharhash}\ ys\ {\isasymRightarrow}\ case\ ys\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ x\ {\isacharbar}\ y\ {\isacharhash}\ zs\ {\isasymRightarrow}\ y
22.44 \end{isabelle}%
22.45
22.46 \end{quote}
23.1 --- a/doc-src/TutorialI/Misc/document/case_splits.tex Fri Sep 01 18:29:52 2000 +0200
23.2 +++ b/doc-src/TutorialI/Misc/document/case_splits.tex Fri Sep 01 19:09:44 2000 +0200
23.3 @@ -2,7 +2,7 @@
23.4 \begin{isabellebody}%
23.5 %
23.6 \begin{isamarkuptext}%
23.7 -Goals containing \isaindex{if}-expressions are usually proved by case
23.8 +Goals containing \isa{if}-expressions are usually proved by case
23.9 distinction on the condition of the \isa{if}. For example the goal%
23.10 \end{isamarkuptext}%
23.11 \isacommand{lemma}\ {\isachardoublequote}{\isasymforall}xs{\isachardot}\ if\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ then\ rev\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ else\ rev\ xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isachardoublequote}%
23.12 @@ -10,18 +10,18 @@
23.13 \noindent
23.14 can be split into
23.15 \begin{isabelle}
23.16 -~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])%
23.17 +~1.~{\isasymforall}xs.~(xs~=~[]~{\isasymlongrightarrow}~rev~xs~=~[])~{\isasymand}~(xs~{\isasymnoteq}~[]~{\isasymlongrightarrow}~rev~xs~{\isasymnoteq}~[])
23.18 \end{isabelle}
23.19 by a degenerate form of simplification%
23.20 \end{isamarkuptxt}%
23.21 \isacommand{apply}{\isacharparenleft}simp\ only{\isacharcolon}\ split{\isacharcolon}\ split{\isacharunderscore}if{\isacharparenright}%
23.22 \begin{isamarkuptext}%
23.23 \noindent
23.24 -where no simplification rules are included (\isa{only:} is followed by the
23.25 +where no simplification rules are included (\isa{only{\isacharcolon}} is followed by the
23.26 empty list of theorems) but the rule \isaindexbold{split_if} for
23.27 -splitting \isa{if}s is added (via the modifier \isa{split:}). Because
23.28 +splitting \isa{if}s is added (via the modifier \isa{split{\isacharcolon}}). Because
23.29 case-splitting on \isa{if}s is almost always the right proof strategy, the
23.30 -simplifier performs it automatically. Try \isacommand{apply}\isa{(simp)}
23.31 +simplifier performs it automatically. Try \isacommand{apply}\isa{{\isacharparenleft}simp{\isacharparenright}}
23.32 on the initial goal above.
23.33
23.34 This splitting idea generalizes from \isa{if} to \isaindex{case}:%
23.35 @@ -32,7 +32,7 @@
23.36 becomes
23.37 \begin{isabelle}
23.38 ~1.~(xs~=~[]~{\isasymlongrightarrow}~zs~=~xs~@~zs)~{\isasymand}\isanewline
23.39 -~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)%
23.40 +~~~~({\isasymforall}a~list.~xs~=~a~\#~list~{\isasymlongrightarrow}~a~\#~list~@~zs~=~xs~@~zs)
23.41 \end{isabelle}
23.42 by typing%
23.43 \end{isamarkuptxt}%
23.44 @@ -41,16 +41,16 @@
23.45 \noindent
23.46 In contrast to \isa{if}-expressions, the simplifier does not split
23.47 \isa{case}-expressions by default because this can lead to nontermination
23.48 -in case of recursive datatypes. Again, if the \isa{only:} modifier is
23.49 +in the case of recursive datatypes. Again, if the \isa{only{\isacharcolon}} modifier is
23.50 dropped, the above goal is solved,%
23.51 \end{isamarkuptext}%
23.52 \isacommand{by}{\isacharparenleft}simp\ split{\isacharcolon}\ list{\isachardot}split{\isacharparenright}%
23.53 \begin{isamarkuptext}%
23.54 \noindent%
23.55 -which \isacommand{apply}\isa{(simp)} alone will not do.
23.56 +which \isacommand{apply}\isa{{\isacharparenleft}simp{\isacharparenright}} alone will not do.
23.57
23.58 In general, every datatype $t$ comes with a theorem
23.59 -\isa{$t$.split} which can be declared to be a \bfindex{split rule} either
23.60 +$t$\isa{{\isachardot}split} which can be declared to be a \bfindex{split rule} either
23.61 locally as above, or by giving it the \isa{split} attribute globally:%
23.62 \end{isamarkuptext}%
23.63 \isacommand{lemmas}\ {\isacharbrackleft}split{\isacharbrackright}\ {\isacharequal}\ list{\isachardot}split%
23.64 @@ -68,20 +68,22 @@
23.65 \begin{isamarkuptext}%
23.66 The above split rules intentionally only affect the conclusion of a
23.67 subgoal. If you want to split an \isa{if} or \isa{case}-expression in
23.68 -the assumptions, you have to apply \isa{split\_if\_asm} or $t$\isa{.split_asm}:%
23.69 +the assumptions, you have to apply \isa{split{\isacharunderscore}if{\isacharunderscore}asm} or
23.70 +$t$\isa{{\isachardot}split{\isacharunderscore}asm}:%
23.71 \end{isamarkuptext}%
23.72 \isacommand{lemma}\ {\isachardoublequote}if\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ then\ ys\ {\isachartilde}{\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ else\ ys\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}{\isacharequal}{\isachargreater}\ xs\ {\isacharat}\ ys\ {\isachartilde}{\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isachardoublequote}\isanewline
23.73 \isacommand{apply}{\isacharparenleft}simp\ only{\isacharcolon}\ split{\isacharcolon}\ split{\isacharunderscore}if{\isacharunderscore}asm{\isacharparenright}%
23.74 \begin{isamarkuptxt}%
23.75 \noindent
23.76 In contrast to splitting the conclusion, this actually creates two
23.77 -separate subgoals (which are solved by \isa{simp\_all}):
23.78 +separate subgoals (which are solved by \isa{simp{\isacharunderscore}all}):
23.79 \begin{isabelle}
23.80 \ \isadigit{1}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharat}\ \mbox{ys}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\isanewline
23.81 \ \isadigit{2}{\isachardot}\ {\isasymlbrakk}\mbox{xs}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}{\isacharsemicolon}\ \mbox{ys}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isasymrbrakk}\ {\isasymLongrightarrow}\ \mbox{xs}\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}
23.82 \end{isabelle}
23.83 If you need to split both in the assumptions and the conclusion,
23.84 -use $t$\isa{.splits} which subsumes $t$\isa{.split} and $t$\isa{.split_asm}.%
23.85 +use $t$\isa{{\isachardot}splits} which subsumes $t$\isa{{\isachardot}split} and
23.86 +$t$\isa{{\isachardot}split{\isacharunderscore}asm}. Analogously, there is \isa{if{\isacharunderscore}splits}.%
23.87 \end{isamarkuptxt}%
23.88 \end{isabellebody}%
23.89 %%% Local Variables:
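The split-rule machinery discussed in this section can be exercised on a small self-contained goal; this is a sketch in the tutorial's own syntax, using `list.split` exactly as shown above:

```isabelle
lemma "(case xs of [] => zs | y#ys => (y#ys) @ zs) = xs @ zs"
by(simp split: list.split)
```

Without the `split:` modifier, plain `simp` leaves the `case`-expression untouched, which is the behaviour the text describes.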
24.1 --- a/doc-src/TutorialI/Misc/document/cond_rewr.tex Fri Sep 01 18:29:52 2000 +0200
24.2 +++ b/doc-src/TutorialI/Misc/document/cond_rewr.tex Fri Sep 01 19:09:44 2000 +0200
24.3 @@ -11,17 +11,17 @@
24.4 \noindent
24.5 Note the use of ``\ttindexboldpos{,}{$Isar}'' to string together a
24.6 sequence of methods. Assuming that the simplification rule
24.7 -\isa{{\isacharparenleft}rev\ \mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}\mbox{xs}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}}
24.8 +\isa{{\isacharparenleft}rev\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}}
24.9 is present as well,%
24.10 \end{isamarkuptext}%
24.11 \isacommand{lemma}\ {\isachardoublequote}xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymLongrightarrow}\ hd{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharhash}\ tl{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ rev\ xs{\isachardoublequote}%
24.12 \begin{isamarkuptext}%
24.13 \noindent
24.14 is proved by plain simplification:
24.15 -the conditional equation \isa{hd_Cons_tl} above
24.16 -can simplify \isa{hd(rev~xs)~\#~tl(rev~xs)} to \isa{rev xs}
24.17 -because the corresponding precondition \isa{rev xs \isasymnoteq\ []}
24.18 -simplifies to \isa{xs \isasymnoteq\ []}, which is exactly the local
24.19 +the conditional equation \isa{hd{\isacharunderscore}Cons{\isacharunderscore}tl} above
24.20 +can simplify \isa{hd\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharhash}\ tl\ {\isacharparenleft}rev\ xs{\isacharparenright}} to \isa{rev\ xs}
24.21 +because the corresponding precondition \isa{rev\ xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}}
24.22 +simplifies to \isa{xs\ {\isasymnoteq}\ {\isacharbrackleft}{\isacharbrackright}}, which is exactly the local
24.23 assumption of the subgoal.%
24.24 \end{isamarkuptext}%
24.25 \end{isabellebody}%
25.1 --- a/doc-src/TutorialI/Misc/document/let_rewr.tex Fri Sep 01 18:29:52 2000 +0200
25.2 +++ b/doc-src/TutorialI/Misc/document/let_rewr.tex Fri Sep 01 19:09:44 2000 +0200
25.3 @@ -1,10 +1,21 @@
25.4 %
25.5 \begin{isabellebody}%
25.6 +%
25.7 +\isamarkupsubsubsection{Simplifying let-expressions}
25.8 +%
25.9 +\begin{isamarkuptext}%
25.10 +\index{simplification!of let-expressions}
25.11 +Proving a goal containing \isaindex{let}-expressions almost invariably
25.12 +requires the \isa{let}-con\-structs to be expanded at some point. Since
25.13 +\isa{let}-\isa{in} is just syntactic sugar for a predefined constant
25.14 +(called \isa{Let}), expanding \isa{let}-constructs means rewriting with
25.15 +\isa{Let{\isacharunderscore}def}:%
25.16 +\end{isamarkuptext}%
25.17 \isacommand{lemma}\ {\isachardoublequote}{\isacharparenleft}let\ xs\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}\ in\ xs{\isacharat}ys{\isacharat}xs{\isacharparenright}\ {\isacharequal}\ ys{\isachardoublequote}\isanewline
25.18 \isacommand{by}{\isacharparenleft}simp\ add{\isacharcolon}\ Let{\isacharunderscore}def{\isacharparenright}%
25.19 \begin{isamarkuptext}%
25.20 If, in a particular context, there is no danger of a combinatorial explosion
25.21 -of nested \isa{let}s one could even add \isa{Let_def} permanently:%
25.22 +of nested \isa{let}s, one could even add \isa{Let{\isacharunderscore}def} permanently:%
25.23 \end{isamarkuptext}%
25.24 \isacommand{lemmas}\ {\isacharbrackleft}simp{\isacharbrackright}\ {\isacharequal}\ Let{\isacharunderscore}def\end{isabellebody}%
25.25 %%% Local Variables:
26.1 --- a/doc-src/TutorialI/Misc/document/natsum.tex Fri Sep 01 18:29:52 2000 +0200
26.2 +++ b/doc-src/TutorialI/Misc/document/natsum.tex Fri Sep 01 19:09:44 2000 +0200
26.3 @@ -7,7 +7,7 @@
26.4 \begin{quote}
26.5
26.6 \begin{isabelle}%
26.7 -case\ \mbox{n}\ of\ \isadigit{0}\ {\isasymRightarrow}\ \isadigit{0}\ {\isacharbar}\ Suc\ \mbox{m}\ {\isasymRightarrow}\ \mbox{m}
26.8 +case\ n\ of\ \isadigit{0}\ {\isasymRightarrow}\ \isadigit{0}\ {\isacharbar}\ Suc\ m\ {\isasymRightarrow}\ m
26.9 \end{isabelle}%
26.10
26.11 \end{quote}
27.1 --- a/doc-src/TutorialI/Misc/document/pairs.tex Fri Sep 01 18:29:52 2000 +0200
27.2 +++ b/doc-src/TutorialI/Misc/document/pairs.tex Fri Sep 01 19:09:44 2000 +0200
27.3 @@ -16,8 +16,8 @@
27.4 In addition to explicit $\lambda$-abstractions, tuple patterns can be used in
27.5 most variable binding constructs. Typical examples are
27.6 \begin{quote}
27.7 -\isa{let\ {\isacharparenleft}\mbox{x}{\isacharcomma}\ \mbox{y}{\isacharparenright}\ {\isacharequal}\ \mbox{f}\ \mbox{z}\ in\ {\isacharparenleft}\mbox{y}{\isacharcomma}\ \mbox{x}{\isacharparenright}}\\
27.8 -\isa{case\ \mbox{xs}\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{0}\ {\isacharbar}\ {\isacharparenleft}\mbox{x}{\isacharcomma}\ \mbox{y}{\isacharparenright}\ {\isacharhash}\ \mbox{zs}\ {\isasymRightarrow}\ \mbox{x}\ {\isacharplus}\ \mbox{y}}
27.9 +\isa{let\ {\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}\ {\isacharequal}\ f\ z\ in\ {\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}}\\
27.10 +\isa{case\ xs\ of\ {\isacharbrackleft}{\isacharbrackright}\ {\isasymRightarrow}\ \isadigit{0}\ {\isacharbar}\ {\isacharparenleft}x{\isacharcomma}\ y{\isacharparenright}\ {\isacharhash}\ zs\ {\isasymRightarrow}\ x\ {\isacharplus}\ y}
27.11 \end{quote}
27.12 Further important examples are quantifiers and sets (see~\S\ref{quant-pats}).%
27.13 \end{isamarkuptext}%
28.1 --- a/doc-src/TutorialI/Misc/document/trace_simp.tex Fri Sep 01 18:29:52 2000 +0200
28.2 +++ b/doc-src/TutorialI/Misc/document/trace_simp.tex Fri Sep 01 19:09:44 2000 +0200
28.3 @@ -3,7 +3,7 @@
28.4 %
28.5 \begin{isamarkuptext}%
28.6 Using the simplifier effectively may take a bit of experimentation. Set the
28.7 -\ttindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
28.8 +\isaindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
28.9 on:%
28.10 \end{isamarkuptext}%
28.11 \isacommand{ML}\ {\isachardoublequote}set\ trace{\isacharunderscore}simp{\isachardoublequote}\isanewline
29.1 --- a/doc-src/TutorialI/Misc/document/types.tex Fri Sep 01 18:29:52 2000 +0200
29.2 +++ b/doc-src/TutorialI/Misc/document/types.tex Fri Sep 01 19:09:44 2000 +0200
29.3 @@ -24,8 +24,8 @@
29.4 \ \ \ \ \ exor{\isacharunderscore}def{\isacharcolon}\ {\isachardoublequote}exor\ A\ B\ {\isasymequiv}\ A\ {\isasymand}\ {\isasymnot}B\ {\isasymor}\ {\isasymnot}A\ {\isasymand}\ B{\isachardoublequote}%
29.5 \begin{isamarkuptext}%
29.6 \noindent%
29.7 -where \isacommand{defs}\indexbold{*defs} is a keyword and \isa{nand_def} and
29.8 -\isa{exor_def} are user-supplied names.
29.9 +where \isacommand{defs}\indexbold{*defs} is a keyword and
29.10 +\isa{nand{\isacharunderscore}def} and \isa{exor{\isacharunderscore}def} are user-supplied names.
29.11 The symbol \indexboldpos{\isasymequiv}{$IsaEq} is a special form of equality
29.12 that must be used in constant definitions.
29.13 Declarations and definitions can also be merged%
29.14 @@ -36,7 +36,7 @@
29.15 \ \ \ \ \ \ \ \ \ {\isachardoublequote}exor\isadigit{2}\ A\ B\ {\isasymequiv}\ {\isacharparenleft}A\ {\isasymor}\ B{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isasymnot}A\ {\isasymor}\ {\isasymnot}B{\isacharparenright}{\isachardoublequote}%
29.16 \begin{isamarkuptext}%
29.17 \noindent\indexbold{*constdefs}%
29.18 -in which case the default name of each definition is \isa{$f$_def}, where
29.19 +in which case the default name of each definition is $f$\isa{{\isacharunderscore}def}, where
29.20 $f$ is the name of the defined constant.%
29.21 \end{isamarkuptext}%
29.22 \end{isabellebody}%
30.1 --- a/doc-src/TutorialI/Misc/let_rewr.thy Fri Sep 01 18:29:52 2000 +0200
30.2 +++ b/doc-src/TutorialI/Misc/let_rewr.thy Fri Sep 01 19:09:44 2000 +0200
30.3 @@ -1,12 +1,23 @@
30.4 (*<*)
30.5 theory let_rewr = Main:;
30.6 (*>*)
30.7 +
30.8 +subsubsection{*Simplifying let-expressions*}
30.9 +
30.10 +text{*\index{simplification!of let-expressions}
30.11 +Proving a goal containing \isaindex{let}-expressions almost invariably
30.12 +requires the @{text"let"}-con\-structs to be expanded at some point. Since
30.13 +@{text"let"}-@{text"in"} is just syntactic sugar for a predefined constant
30.14 +(called @{term"Let"}), expanding @{text"let"}-constructs means rewriting with
30.15 +@{thm[source]Let_def}:
30.16 +*}
30.17 +
30.18 lemma "(let xs = [] in xs@ys@xs) = ys";
30.19 by(simp add: Let_def);
30.20
30.21 text{*
30.22 If, in a particular context, there is no danger of a combinatorial explosion
30.23 -of nested \isa{let}s one could even add \isa{Let_def} permanently:
30.24 +of nested @{text"let"}s, one could even add @{thm[source]Let_def} permanently:
30.25 *}
30.26 lemmas [simp] = Let_def;
30.27 (*<*)
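An analogous goal with a non-trivial binding also falls to unfolding `Let_def` — a hypothetical example in the same style (it relies on the standard simp rule for `rev (rev xs)`):

```isabelle
lemma "(let x = rev xs in rev x) = xs"
by(simp add: Let_def)
```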
31.1 --- a/doc-src/TutorialI/Misc/natsum.thy Fri Sep 01 18:29:52 2000 +0200
31.2 +++ b/doc-src/TutorialI/Misc/natsum.thy Fri Sep 01 19:09:44 2000 +0200
31.3 @@ -2,7 +2,7 @@
31.4 theory natsum = Main:;
31.5 (*>*)
31.6 text{*\noindent
31.7 -In particular, there are \isa{case}-expressions, for example
31.8 +In particular, there are @{text"case"}-expressions, for example
31.9 \begin{quote}
31.10 @{term[display]"case n of 0 => 0 | Suc m => m"}
31.11 \end{quote}
32.1 --- a/doc-src/TutorialI/Misc/pairs.thy Fri Sep 01 18:29:52 2000 +0200
32.2 +++ b/doc-src/TutorialI/Misc/pairs.thy Fri Sep 01 19:09:44 2000 +0200
32.3 @@ -4,7 +4,7 @@
32.4 text{*
32.5 HOL also has pairs: \isa{($a@1$,$a@2$)} is of type \isa{$\tau@1$ *
32.6 $\tau@2$} provided each $a@i$ is of type $\tau@i$. The components of a pair
32.7 -are extracted by \isa{fst} and \isa{snd}: \isa{fst($x$,$y$) = $x$} and
32.8 +are extracted by @{term"fst"} and @{term"snd"}: \isa{fst($x$,$y$) = $x$} and
32.9 \isa{snd($x$,$y$) = $y$}. Tuples are simulated by pairs nested to the right:
32.10 \isa{($a@1$,$a@2$,$a@3$)} stands for \isa{($a@1$,($a@2$,$a@3$))} and
32.11 \isa{$\tau@1$ * $\tau@2$ * $\tau@3$} for \isa{$\tau@1$ * ($\tau@2$ *
33.1 --- a/doc-src/TutorialI/Misc/prime_def.thy Fri Sep 01 18:29:52 2000 +0200
33.2 +++ b/doc-src/TutorialI/Misc/prime_def.thy Fri Sep 01 19:09:44 2000 +0200
33.3 @@ -6,8 +6,8 @@
33.4
33.5 "prime(p) \\<equiv> 1 < p \\<and> (m dvd p \\<longrightarrow> (m=1 \\<or> m=p))";
33.6 text{*\noindent\small
33.7 -where \isa{dvd} means ``divides''.
33.8 -Isabelle rejects this ``definition'' because of the extra \isa{m} on the
33.9 +where @{text"dvd"} means ``divides''.
33.10 +Isabelle rejects this ``definition'' because of the extra @{term"m"} on the
33.11 right-hand side, which would introduce an inconsistency (why?). What you
33.12 should have written is
33.13 *}
34.1 --- a/doc-src/TutorialI/Misc/trace_simp.thy Fri Sep 01 18:29:52 2000 +0200
34.2 +++ b/doc-src/TutorialI/Misc/trace_simp.thy Fri Sep 01 19:09:44 2000 +0200
34.3 @@ -4,7 +4,7 @@
34.4
34.5 text{*
34.6 Using the simplifier effectively may take a bit of experimentation. Set the
34.7 -\ttindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
34.8 +\isaindexbold{trace_simp} \rmindex{flag} to get a better idea of what is going
34.9 on:
34.10 *}
34.11 ML "set trace_simp";
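A minimal tracing session might look like the following sketch; `set` turns the ML flag on as in the text, and (assuming the usual flag interface of this Isabelle version) `reset` turns it off again afterwards:

```isabelle
ML "set trace_simp";
lemma "rev [a] = [a]";
by(simp);
ML "reset trace_simp";
```

Each rewrite step performed by the simplifier is then reported on the response channel, which is what makes the flag useful for experimentation.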
35.1 --- a/doc-src/TutorialI/Misc/types.thy Fri Sep 01 18:29:52 2000 +0200
35.2 +++ b/doc-src/TutorialI/Misc/types.thy Fri Sep 01 19:09:44 2000 +0200
35.3 @@ -19,7 +19,7 @@
35.4 \label{sec:ConstDefinitions}
35.5 \indexbold{definition}
35.6
35.7 -The above constants \isa{nand} and \isa{exor} are non-recursive and can
35.8 +The above constants @{term"nand"} and @{term"exor"} are non-recursive and can
35.9 therefore be defined directly by
35.10 *}
35.11
35.12 @@ -27,8 +27,8 @@
35.13 exor_def: "exor A B \\<equiv> A \\<and> \\<not>B \\<or> \\<not>A \\<and> B";
35.14
35.15 text{*\noindent%
35.16 -where \isacommand{defs}\indexbold{*defs} is a keyword and \isa{nand_def} and
35.17 -\isa{exor_def} are user-supplied names.
35.18 +where \isacommand{defs}\indexbold{*defs} is a keyword and
35.19 +@{thm[source]nand_def} and @{thm[source]exor_def} are user-supplied names.
35.20 The symbol \indexboldpos{\isasymequiv}{$IsaEq} is a special form of equality
35.21 that must be used in constant definitions.
35.22 Declarations and definitions can also be merged
35.23 @@ -40,7 +40,7 @@
35.24 "exor2 A B \\<equiv> (A \\<or> B) \\<and> (\\<not>A \\<or> \\<not>B)";
35.25
35.26 text{*\noindent\indexbold{*constdefs}%
35.27 -in which case the default name of each definition is \isa{$f$_def}, where
35.28 +in which case the default name of each definition is $f$@{text"_def"}, where
35.29 $f$ is the name of the defined constant.*}(*<*)
35.30 end
35.31 (*>*)
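Thus, under \isacommand{constdefs}, a further hypothetical constant `nor` (not part of the tutorial text) would automatically receive the theorem name `nor_def`, in ASCII syntax:

```isabelle
constdefs nor :: "bool => bool => bool"
         "nor A B == ~A & ~B"
```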
36.1 --- a/doc-src/TutorialI/Recdef/Induction.thy Fri Sep 01 18:29:52 2000 +0200
36.2 +++ b/doc-src/TutorialI/Recdef/Induction.thy Fri Sep 01 19:09:44 2000 +0200
36.3 @@ -10,7 +10,7 @@
36.4 again induction. But this time the structural form of induction that comes
36.5 with datatypes is unlikely to work well---otherwise we could have defined the
36.6 function by \isacommand{primrec}. Therefore \isacommand{recdef} automatically
36.7 -proves a suitable induction rule $f$\isa{.induct} that follows the
36.8 +proves a suitable induction rule $f$@{text".induct"} that follows the
36.9 recursion pattern of the particular function $f$. We call this
36.10 \textbf{recursion induction}. Roughly speaking, it
36.11 requires you to prove for each \isacommand{recdef} equation that the property
36.12 @@ -21,16 +21,16 @@
36.13 lemma "map f (sep(x,xs)) = sep(f x, map f xs)";
36.14
36.15 txt{*\noindent
36.16 -involving the predefined \isa{map} functional on lists: \isa{map f xs}
36.17 -is the result of applying \isa{f} to all elements of \isa{xs}. We prove
36.18 -this lemma by recursion induction w.r.t. \isa{sep}:
36.19 +involving the predefined @{term"map"} functional on lists: @{term"map f xs"}
36.20 +is the result of applying @{term"f"} to all elements of @{term"xs"}. We prove
36.21 +this lemma by recursion induction w.r.t. @{term"sep"}:
36.22 *}
36.23
36.24 apply(induct_tac x xs rule: sep.induct);
36.25
36.26 txt{*\noindent
36.27 The resulting proof state has three subgoals corresponding to the three
36.28 -clauses for \isa{sep}:
36.29 +clauses for @{term"sep"}:
36.30 \begin{isabelle}
36.31 ~1.~{\isasymAnd}a.~map~f~(sep~(a,~[]))~=~sep~(f~a,~map~f~[])\isanewline
36.32 ~2.~{\isasymAnd}a~x.~map~f~(sep~(a,~[x]))~=~sep~(f~a,~map~f~[x])\isanewline
36.33 @@ -47,24 +47,24 @@
36.34 Try proving the above lemma by structural induction, and you find that you
36.35 need an additional case distinction. What is worse, the names of variables
36.36 are invented by Isabelle and have nothing to do with the names in the
36.37 -definition of \isa{sep}.
36.38 +definition of @{term"sep"}.
36.39
36.40 In general, the format of invoking recursion induction is
36.41 -\begin{ttbox}
36.42 -apply(induct_tac \(x@1 \dots x@n\) rule: \(f\).induct)
36.43 -\end{ttbox}\index{*induct_tac}%
36.44 +\begin{quote}
36.45 +\isacommand{apply}@{text"(induct_tac ("}$x@1 \dots x@n$ @{text"rule:"} $f$@{text".induct)"}
36.46 +\end{quote}\index{*induct_tac}%
36.47 where $x@1~\dots~x@n$ is a list of free variables in the subgoal and $f$ the
36.48 name of a function that takes an $n$-tuple. Usually the subgoal will
36.49 contain the term $f~x@1~\dots~x@n$ but this need not be the case. The
36.50 -induction rules do not mention $f$ at all. For example \isa{sep.induct}
36.51 +induction rules do not mention $f$ at all. For example @{thm[source]sep.induct}
36.52 \begin{isabelle}
36.53 {\isasymlbrakk}~{\isasymAnd}a.~P~a~[];\isanewline
36.54 ~~{\isasymAnd}a~x.~P~a~[x];\isanewline
36.55 ~~{\isasymAnd}a~x~y~zs.~P~a~(y~\#~zs)~{\isasymLongrightarrow}~P~a~(x~\#~y~\#~zs){\isasymrbrakk}\isanewline
36.56 {\isasymLongrightarrow}~P~u~v%
36.57 \end{isabelle}
36.58 -merely says that in order to prove a property \isa{P} of \isa{u} and
36.59 -\isa{v} you need to prove it for the three cases where \isa{v} is the
36.60 +merely says that in order to prove a property @{term"P"} of @{term"u"} and
36.61 +@{term"v"} you need to prove it for the three cases where @{term"v"} is the
36.62 empty list, the singleton list, and the list with at least two elements
36.63 (in which case you may assume it holds for the tail of that list).
36.64 *}
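Putting the pieces of this section together, the whole proof of the `sep`/`map` lemma is just the recursion induction followed by simplification — a sketch assuming `sep` as defined earlier in this chapter:

```isabelle
lemma "map f (sep(x,xs)) = sep(f x, map f xs)";
apply(induct_tac x xs rule: sep.induct);
apply(simp_all);
done
```

All three subgoals produced by `sep.induct` are equational and fall to `simp_all`.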
37.1 --- a/doc-src/TutorialI/Recdef/Nested1.thy Fri Sep 01 18:29:52 2000 +0200
37.2 +++ b/doc-src/TutorialI/Recdef/Nested1.thy Fri Sep 01 19:09:44 2000 +0200
37.3 @@ -26,9 +26,9 @@
37.4 @{term[display]"t : set ts --> size t < Suc (term_list_size ts)"}
37.5 \end{quote}
37.6 where @{term"set"} returns the set of elements of a list (no special
37.7 -knowledge of sets is required in the following) and @{name"term_list_size ::
37.8 +knowledge of sets is required in the following) and @{text"term_list_size ::
37.9 term list \<Rightarrow> nat"} is an auxiliary function automatically defined by Isabelle
37.10 -(when @{name"term"} was defined). First we have to understand why the
37.11 +(when @{text"term"} was defined). First we have to understand why the
37.12 recursive call of @{term"trev"} underneath @{term"map"} leads to the above
37.13 condition. The reason is that \isacommand{recdef} ``knows'' that @{term"map"}
37.14 will apply @{term"trev"} only to elements of @{term"ts"}. Thus the above
38.1 --- a/doc-src/TutorialI/Recdef/Nested2.thy Fri Sep 01 18:29:52 2000 +0200
38.2 +++ b/doc-src/TutorialI/Recdef/Nested2.thy Fri Sep 01 19:09:44 2000 +0200
38.3 @@ -19,7 +19,7 @@
38.4 applies it automatically and the above definition of @{term"trev"}
38.5 succeeds now. As a reward for our effort, we can now prove the desired
38.6 lemma directly. The key is the fact that we no longer need the verbose
38.7 -induction schema for type @{name"term"} but the simpler one arising from
38.8 +induction schema for type @{text"term"} but the simpler one arising from
38.9 @{term"trev"}:
38.10 *};
38.11
38.12 @@ -39,7 +39,7 @@
38.13 text{*\noindent
38.14 If the proof of the induction step mystifies you, we recommend going through
38.15 the chain of simplification steps in detail; you will probably need the help of
38.16 -@{name"trace_simp"}.
38.17 +@{text"trace_simp"}.
38.18 %\begin{quote}
38.19 %{term[display]"trev(trev(App f ts))"}\\
38.20 %{term[display]"App f (rev(map trev (rev(map trev ts))))"}\\
38.21 @@ -66,7 +66,7 @@
38.22 \isacommand{recdef} would try to prove the unprovable @{term"size t < Suc
38.23 (term_list_size ts)"}, without any assumption about @{term"t"}. Therefore
38.24 \isacommand{recdef} has been supplied with the congruence theorem
38.25 -@{name"map_cong"}:
38.26 +@{thm[source]map_cong}:
38.27 \begin{quote}
38.28 @{thm[display,margin=50]"map_cong"[no_vars]}
38.29 \end{quote}
39.1 --- a/doc-src/TutorialI/Recdef/document/Induction.tex Fri Sep 01 18:29:52 2000 +0200
39.2 +++ b/doc-src/TutorialI/Recdef/document/Induction.tex Fri Sep 01 19:09:44 2000 +0200
39.3 @@ -9,7 +9,7 @@
39.4 again induction. But this time the structural form of induction that comes
39.5 with datatypes is unlikely to work well---otherwise we could have defined the
39.6 function by \isacommand{primrec}. Therefore \isacommand{recdef} automatically
39.7 -proves a suitable induction rule $f$\isa{.induct} that follows the
39.8 +proves a suitable induction rule $f$\isa{{\isachardot}induct} that follows the
39.9 recursion pattern of the particular function $f$. We call this
39.10 \textbf{recursion induction}. Roughly speaking, it
39.11 requires you to prove for each \isacommand{recdef} equation that the property
39.12 @@ -19,7 +19,7 @@
39.13 \isacommand{lemma}\ {\isachardoublequote}map\ f\ {\isacharparenleft}sep{\isacharparenleft}x{\isacharcomma}xs{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ sep{\isacharparenleft}f\ x{\isacharcomma}\ map\ f\ xs{\isacharparenright}{\isachardoublequote}%
39.14 \begin{isamarkuptxt}%
39.15 \noindent
39.16 -involving the predefined \isa{map} functional on lists: \isa{map f xs}
39.17 +involving the predefined \isa{map} functional on lists: \isa{map\ f\ xs}
39.18 is the result of applying \isa{f} to all elements of \isa{xs}. We prove
39.19 this lemma by recursion induction w.r.t. \isa{sep}:%
39.20 \end{isamarkuptxt}%
39.21 @@ -45,13 +45,13 @@
39.22 definition of \isa{sep}.
39.23
39.24 In general, the format of invoking recursion induction is
39.25 -\begin{ttbox}
39.26 -apply(induct_tac \(x@1 \dots x@n\) rule: \(f\).induct)
39.27 -\end{ttbox}\index{*induct_tac}%
39.28 +\begin{quote}
39.29 +\isacommand{apply}\isa{{\isacharparenleft}induct{\isacharunderscore}tac\ {\isacharparenleft}}$x@1 \dots x@n$ \isa{rule{\isacharcolon}} $f$\isa{{\isachardot}induct{\isacharparenright}}
39.30 +\end{quote}\index{*induct_tac}%
39.31 where $x@1~\dots~x@n$ is a list of free variables in the subgoal and $f$ the
39.32 name of a function that takes an $n$-tuple. Usually the subgoal will
39.33 contain the term $f~x@1~\dots~x@n$ but this need not be the case. The
39.34 -induction rules do not mention $f$ at all. For example \isa{sep.induct}
39.35 +induction rules do not mention $f$ at all. For example \isa{sep{\isachardot}induct}
39.36 \begin{isabelle}
39.37 {\isasymlbrakk}~{\isasymAnd}a.~P~a~[];\isanewline
39.38 ~~{\isasymAnd}a~x.~P~a~[x];\isanewline
40.1 --- a/doc-src/TutorialI/Recdef/document/Nested1.tex Fri Sep 01 18:29:52 2000 +0200
40.2 +++ b/doc-src/TutorialI/Recdef/document/Nested1.tex Fri Sep 01 19:09:44 2000 +0200
40.3 @@ -23,7 +23,7 @@
40.4 \begin{quote}
40.5
40.6 \begin{isabelle}%
40.7 -\mbox{t}\ {\isasymin}\ set\ \mbox{ts}\ {\isasymlongrightarrow}\ size\ \mbox{t}\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ \mbox{ts}{\isacharparenright}
40.8 +t\ {\isasymin}\ set\ ts\ {\isasymlongrightarrow}\ size\ t\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}
40.9 \end{isabelle}%
40.10
40.11 \end{quote}
40.12 @@ -32,9 +32,9 @@
40.13 (when \isa{term} was defined). First we have to understand why the
40.14 recursive call of \isa{trev} underneath \isa{map} leads to the above
40.15 condition. The reason is that \isacommand{recdef} ``knows'' that \isa{map}
40.16 -will apply \isa{trev} only to elements of \isa{\mbox{ts}}. Thus the above
40.17 -condition expresses that the size of the argument \isa{\mbox{t}\ {\isasymin}\ set\ \mbox{ts}} of any
40.18 -recursive call of \isa{trev} is strictly less than \isa{size\ {\isacharparenleft}App\ \mbox{f}\ \mbox{ts}{\isacharparenright}\ {\isacharequal}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ \mbox{ts}{\isacharparenright}}. We will now prove the termination condition and
40.19 +will apply \isa{trev} only to elements of \isa{ts}. Thus the above
40.20 +condition expresses that the size of the argument \isa{t\ {\isasymin}\ set\ ts} of any
40.21 +recursive call of \isa{trev} is strictly less than \isa{size\ {\isacharparenleft}App\ f\ ts{\isacharparenright}\ {\isacharequal}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}}. We will now prove the termination condition and
40.22 continue with our definition. Below we return to the question of how
40.23 \isacommand{recdef} ``knows'' about \isa{map}.%
40.24 \end{isamarkuptext}%
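The hunk above concerns the termination condition for trev on the nested datatype term: every t in set ts is strictly smaller than App f ts. A rough Python analogue of that datatype and of trev (the class names, size function, and equations here are our reconstruction from the hunk, not the tutorial's actual code):

```python
# Rough Python analogue of the tutorial's nested datatype
#   term = Var 'v | App 'f (term list)
# and of trev, which reverses argument lists recursively:
#   trev (App f ts) = App f (rev (map trev ts))
# recdef's termination condition says each t in set ts is smaller
# than App f ts itself; size() below makes that measure explicit.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Var:
    name: str

@dataclass
class App:
    f: str
    ts: List["Term"]

Term = Union[Var, App]

def size(t: Term) -> int:
    """Number of constructors in t; the measure used for termination."""
    if isinstance(t, Var):
        return 1
    return 1 + sum(size(s) for s in t.ts)

def trev(t: Term) -> Term:
    if isinstance(t, Var):
        return t
    # every recursive call trev(s) has size(s) < size(t)
    return App(t.f, [trev(s) for s in reversed(t.ts)])
```

Reversing twice restores the original term, which is exactly the property proved in the next hunk.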
41.1 --- a/doc-src/TutorialI/Recdef/document/Nested2.tex Fri Sep 01 18:29:52 2000 +0200
41.2 +++ b/doc-src/TutorialI/Recdef/document/Nested2.tex Fri Sep 01 19:09:44 2000 +0200
41.3 @@ -21,12 +21,12 @@
41.4 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ t\ rule{\isacharcolon}trev{\isachardot}induct{\isacharparenright}%
41.5 \begin{isamarkuptxt}%
41.6 \noindent
41.7 -This leaves us with a trivial base case \isa{trev\ {\isacharparenleft}trev\ {\isacharparenleft}Var\ \mbox{x}{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ Var\ \mbox{x}} and the step case
41.8 +This leaves us with a trivial base case \isa{trev\ {\isacharparenleft}trev\ {\isacharparenleft}Var\ x{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ Var\ x} and the step case
41.9 \begin{quote}
41.10
41.11 \begin{isabelle}%
41.12 -{\isasymforall}\mbox{t}{\isachardot}\ \mbox{t}\ {\isasymin}\ set\ \mbox{ts}\ {\isasymlongrightarrow}\ trev\ {\isacharparenleft}trev\ \mbox{t}{\isacharparenright}\ {\isacharequal}\ \mbox{t}\ {\isasymLongrightarrow}\isanewline
41.13 -trev\ {\isacharparenleft}trev\ {\isacharparenleft}App\ \mbox{f}\ \mbox{ts}{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ App\ \mbox{f}\ \mbox{ts}
41.14 +{\isasymforall}t{\isachardot}\ t\ {\isasymin}\ set\ ts\ {\isasymlongrightarrow}\ trev\ {\isacharparenleft}trev\ t{\isacharparenright}\ {\isacharequal}\ t\ {\isasymLongrightarrow}\isanewline
41.15 +trev\ {\isacharparenleft}trev\ {\isacharparenleft}App\ f\ ts{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ App\ f\ ts
41.16 \end{isabelle}%
41.17
41.18 \end{quote}
41.19 @@ -50,7 +50,7 @@
41.20
41.21 The above definition of \isa{trev} is superior to the one in
41.22 \S\ref{sec:nested-datatype} because it brings \isa{rev} into play, about
41.23 -which already know a lot, in particular \isa{rev\ {\isacharparenleft}rev\ \mbox{xs}{\isacharparenright}\ {\isacharequal}\ \mbox{xs}}.
41.24 +which we already know a lot, in particular \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs}.
41.25 Thus this proof is a good example of an important principle:
41.26 \begin{quote}
41.27 \emph{Choose your definitions carefully\\
41.28 @@ -60,15 +60,15 @@
41.29 Let us now return to the question of how \isacommand{recdef} can come up with
41.30 sensible termination conditions in the presence of higher-order functions
41.31 like \isa{map}. For a start, if nothing were known about \isa{map},
41.32 -\isa{map\ trev\ \mbox{ts}} might apply \isa{trev} to arbitrary terms, and thus
41.33 -\isacommand{recdef} would try to prove the unprovable \isa{size\ \mbox{t}\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ \mbox{ts}{\isacharparenright}}, without any assumption about \isa{\mbox{t}}. Therefore
41.34 +\isa{map\ trev\ ts} might apply \isa{trev} to arbitrary terms, and thus
41.35 +\isacommand{recdef} would try to prove the unprovable \isa{size\ t\ {\isacharless}\ Suc\ {\isacharparenleft}term{\isacharunderscore}list{\isacharunderscore}size\ ts{\isacharparenright}}, without any assumption about \isa{t}. Therefore
41.36 \isacommand{recdef} has been supplied with the congruence theorem
41.37 \isa{map{\isacharunderscore}cong}:
41.38 \begin{quote}
41.39
41.40 \begin{isabelle}%
41.41 -{\isasymlbrakk}\mbox{xs}\ {\isacharequal}\ \mbox{ys}{\isacharsemicolon}\ {\isasymAnd}\mbox{x}{\isachardot}\ \mbox{x}\ {\isasymin}\ set\ \mbox{ys}\ {\isasymLongrightarrow}\ \mbox{f}\ \mbox{x}\ {\isacharequal}\ \mbox{g}\ \mbox{x}{\isasymrbrakk}\isanewline
41.42 -{\isasymLongrightarrow}\ map\ \mbox{f}\ \mbox{xs}\ {\isacharequal}\ map\ \mbox{g}\ \mbox{ys}
41.43 +{\isasymlbrakk}xs\ {\isacharequal}\ ys{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ x\ {\isasymin}\ set\ ys\ {\isasymLongrightarrow}\ f\ x\ {\isacharequal}\ g\ x{\isasymrbrakk}\isanewline
41.44 +{\isasymLongrightarrow}\ map\ f\ xs\ {\isacharequal}\ map\ g\ ys
41.45 \end{isabelle}%
41.46
41.47 \end{quote}
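The congruence theorem map_cong shown above says: if xs = ys and f and g agree on every element of ys, then map f xs = map g ys. A small Python sketch of that content (the helper name and example functions are ours, purely for illustration):

```python
# Sketch of what the congruence rule map_cong expresses: if two
# functions f and g agree on every element of xs, then
# map f xs = map g xs -- so reasoning about "map trev ts" only
# needs facts about trev on members of ts, not on arbitrary terms.

def agree_on(f, g, xs):
    """f and g coincide on the elements of xs."""
    return all(f(x) == g(x) for x in xs)

xs = [0, 1, 2]
f = lambda n: n * 2
g = lambda n: n * 2 if n in xs else -1   # agrees with f only on xs

# the conclusion of map_cong holds even though f and g differ elsewhere:
maps_equal = agree_on(f, g, xs) and list(map(f, xs)) == list(map(g, xs))
```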
42.1 --- a/doc-src/TutorialI/Recdef/document/examples.tex Fri Sep 01 18:29:52 2000 +0200
42.2 +++ b/doc-src/TutorialI/Recdef/document/examples.tex Fri Sep 01 19:09:44 2000 +0200
42.3 @@ -12,13 +12,13 @@
42.4 \begin{isamarkuptext}%
42.5 \noindent
42.6 The definition of \isa{fib} is accompanied by a \bfindex{measure function}
42.7 -\isa{{\isasymlambda}\mbox{n}{\isachardot}\ \mbox{n}} which maps the argument of \isa{fib} to a
42.8 +\isa{{\isasymlambda}n{\isachardot}\ n} which maps the argument of \isa{fib} to a
42.9 natural number. The requirement is that in each equation the measure of the
42.10 argument on the left-hand side is strictly greater than the measure of the
42.11 argument of each recursive call. In the case of \isa{fib} this is
42.12 obviously true because the measure function is the identity and
42.13 -\isa{Suc\ {\isacharparenleft}Suc\ \mbox{x}{\isacharparenright}} is strictly greater than both \isa{x} and
42.14 -\isa{Suc\ \mbox{x}}.
42.15 +\isa{Suc\ {\isacharparenleft}Suc\ x{\isacharparenright}} is strictly greater than both \isa{x} and
42.16 +\isa{Suc\ x}.
42.17
42.18 Slightly more interesting is the insertion of a fixed element
42.19 between any two elements of a list:%
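The fib hunk above explains the measure-function requirement: the measure of the left-hand argument must strictly exceed the measure of each recursive call's argument. A Python rendering of the tutorial's fib equations, with that reasoning as comments (a sketch; the base-case values follow the usual convention and the hunk's equations):

```python
# The tutorial's fib with measure (lambda n. n): in
#   fib (Suc (Suc x)) = fib x + fib (Suc x)
# the left-hand argument Suc (Suc x) = x + 2 strictly exceeds both
# recursive arguments x and x + 1, so the recursion terminates.

def fib(n: int) -> int:
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 2) + fib(n - 1)  # both arguments are < n
```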
42.20 @@ -50,7 +50,7 @@
42.21 \begin{isamarkuptext}%
42.22 \noindent
42.23 This defines exactly the same function as \isa{sep} above, i.e.\
42.24 -\isa{sep1 = sep}.
42.25 +\isa{sep\isadigit{1}\ {\isacharequal}\ sep}.
42.26
42.27 \begin{warn}
42.28 \isacommand{recdef} only takes the first argument of a (curried)
42.29 @@ -76,7 +76,7 @@
42.30 \begin{isamarkuptext}%
42.31 \noindent
42.32 For non-recursive functions the termination measure degenerates to the empty
42.33 -set \isa{\{\}}.%
42.34 +set \isa{{\isacharbraceleft}{\isacharbraceright}}.%
42.35 \end{isamarkuptext}%
42.36 \end{isabellebody}%
42.37 %%% Local Variables:
43.1 --- a/doc-src/TutorialI/Recdef/document/simplification.tex Fri Sep 01 18:29:52 2000 +0200
43.2 +++ b/doc-src/TutorialI/Recdef/document/simplification.tex Fri Sep 01 19:09:44 2000 +0200
43.3 @@ -19,7 +19,7 @@
43.4 \begin{quote}
43.5
43.6 \begin{isabelle}%
43.7 -\mbox{n}\ {\isasymnoteq}\ \isadigit{0}\ {\isasymLongrightarrow}\ \mbox{m}\ mod\ \mbox{n}\ {\isacharless}\ \mbox{n}
43.8 +n\ {\isasymnoteq}\ \isadigit{0}\ {\isasymLongrightarrow}\ m\ mod\ n\ {\isacharless}\ n
43.9 \end{isabelle}%
43.10
43.11 \end{quote}
43.12 @@ -34,7 +34,7 @@
43.13 \begin{quote}
43.14
43.15 \begin{isabelle}%
43.16 -gcd\ {\isacharparenleft}\mbox{m}{\isacharcomma}\ \mbox{n}{\isacharparenright}\ {\isacharequal}\ \mbox{k}
43.17 +gcd\ {\isacharparenleft}m{\isacharcomma}\ n{\isacharparenright}\ {\isacharequal}\ k
43.18 \end{isabelle}%
43.19
43.20 \end{quote}
43.21 @@ -42,7 +42,7 @@
43.22 \begin{quote}
43.23
43.24 \begin{isabelle}%
43.25 -{\isacharparenleft}if\ \mbox{n}\ {\isacharequal}\ \isadigit{0}\ then\ \mbox{m}\ else\ gcd\ {\isacharparenleft}\mbox{n}{\isacharcomma}\ \mbox{m}\ mod\ \mbox{n}{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ \mbox{k}
43.26 +{\isacharparenleft}if\ n\ {\isacharequal}\ \isadigit{0}\ then\ m\ else\ gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}{\isacharparenright}\ {\isacharequal}\ k
43.27 \end{isabelle}%
43.28
43.29 \end{quote}
43.30 @@ -50,19 +50,20 @@
43.31 \begin{quote}
43.32
43.33 \begin{isabelle}%
43.34 -{\isacharparenleft}\mbox{n}\ {\isacharequal}\ \isadigit{0}\ {\isasymlongrightarrow}\ \mbox{m}\ {\isacharequal}\ \mbox{k}{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}\mbox{n}\ {\isasymnoteq}\ \isadigit{0}\ {\isasymlongrightarrow}\ gcd\ {\isacharparenleft}\mbox{n}{\isacharcomma}\ \mbox{m}\ mod\ \mbox{n}{\isacharparenright}\ {\isacharequal}\ \mbox{k}{\isacharparenright}
43.35 +{\isacharparenleft}n\ {\isacharequal}\ \isadigit{0}\ {\isasymlongrightarrow}\ m\ {\isacharequal}\ k{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}n\ {\isasymnoteq}\ \isadigit{0}\ {\isasymlongrightarrow}\ gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}\ {\isacharequal}\ k{\isacharparenright}
43.36 \end{isabelle}%
43.37
43.38 \end{quote}
43.39 -Since the recursive call \isa{gcd\ {\isacharparenleft}\mbox{n}{\isacharcomma}\ \mbox{m}\ mod\ \mbox{n}{\isacharparenright}} is no longer protected by
43.40 +Since the recursive call \isa{gcd\ {\isacharparenleft}n{\isacharcomma}\ m\ mod\ n{\isacharparenright}} is no longer protected by
43.41 an \isa{if}, it is unfolded again, which leads to an infinite chain of
43.42 simplification steps. Fortunately, this problem can be avoided in many
43.43 different ways.
43.44
43.45 -The most radical solution is to disable the offending \\isa{split{\isacharunderscore}if} as
43.46 -shown in the section on case splits in \S\ref{sec:Simplification}. However,
43.47 -we do not recommend this because it means you will often have to invoke the
43.48 -rule explicitly when \isa{if} is involved.
43.49 +The most radical solution is to disable the offending
43.50 +\isa{split{\isacharunderscore}if} as shown in the section on case splits in
43.51 +\S\ref{sec:Simplification}. However, we do not recommend this because it
43.52 +means you will often have to invoke the rule explicitly when \isa{if} is
43.53 +involved.
43.54
43.55 If possible, the definition should be given by pattern matching on the left
43.56 rather than \isa{if} on the right. In the case of \isa{gcd} the
43.57 @@ -75,7 +76,7 @@
43.58 \begin{isamarkuptext}%
43.59 \noindent
43.60 Note that the order of equations is important and hides the side condition
43.61 -\isa{\mbox{n}\ {\isasymnoteq}\ \isadigit{0}}. Unfortunately, in general the case distinction
43.62 +\isa{n\ {\isasymnoteq}\ \isadigit{0}}. Unfortunately, in general the case distinction
43.63 may not be expressible by pattern matching.
43.64
43.65 A very simple alternative is to replace \isa{if} by \isa{case}, which
44.1 --- a/doc-src/TutorialI/Recdef/document/termination.tex Fri Sep 01 18:29:52 2000 +0200
44.2 +++ b/doc-src/TutorialI/Recdef/document/termination.tex Fri Sep 01 19:09:44 2000 +0200
44.3 @@ -6,10 +6,10 @@
44.4 its termination with the help of the user-supplied measure. All of the above
44.5 examples are simple enough that Isabelle can prove automatically that the
44.6 measure of the argument goes down in each recursive call. As a result,
44.7 -\isa{$f$.simps} will contain the defining equations (or variants derived from
44.8 -them) as theorems. For example, look (via \isacommand{thm}) at
44.9 -\isa{sep.simps} and \isa{sep1.simps} to see that they define the same
44.10 -function. What is more, those equations are automatically declared as
44.11 +$f$\isa{{\isachardot}simps} will contain the defining equations (or variants derived
44.12 +from them) as theorems. For example, look (via \isacommand{thm}) at
44.13 +\isa{sep{\isachardot}simps} and \isa{sep\isadigit{1}{\isachardot}simps} to see that they define
44.14 +the same function. What is more, those equations are automatically declared as
44.15 simplification rules.
44.16
44.17 In general, Isabelle may not be able to prove all termination condition
44.18 @@ -29,7 +29,7 @@
44.19 \isacommand{lemma}\ termi{\isacharunderscore}lem{\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isasymnot}\ x\ {\isasymle}\ y\ {\isasymLongrightarrow}\ x\ {\isacharminus}\ Suc\ y\ {\isacharless}\ x\ {\isacharminus}\ y{\isachardoublequote}%
44.20 \begin{isamarkuptxt}%
44.21 \noindent
44.22 -It was not proved automatically because of the special nature of \isa{-}
44.23 +It was not proved automatically because of the special nature of \isa{{\isacharminus}}
44.24 on \isa{nat}. This requires more arithmetic than is tried by default:%
44.25 \end{isamarkuptxt}%
44.26 \isacommand{by}{\isacharparenleft}arith{\isacharparenright}%
44.27 @@ -44,8 +44,8 @@
44.28 \ \ {\isachardoublequote}g{\isacharparenleft}x{\isacharcomma}y{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}if\ x\ {\isasymle}\ y\ then\ x\ else\ g{\isacharparenleft}x{\isacharcomma}y{\isacharplus}\isadigit{1}{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
44.29 \begin{isamarkuptext}%
44.30 \noindent
44.31 -This time everything works fine. Now \isa{g.simps} contains precisely the
44.32 -stated recursion equation for \isa{g} and they are simplification
44.33 +This time everything works fine. Now \isa{g{\isachardot}simps} contains precisely
44.34 +the stated recursion equation for \isa{g} and they are simplification
44.35 rules. Thus we can automatically prove%
44.36 \end{isamarkuptext}%
44.37 \isacommand{theorem}\ wow{\isacharcolon}\ {\isachardoublequote}g{\isacharparenleft}\isadigit{1}{\isacharcomma}\isadigit{0}{\isacharparenright}\ {\isacharequal}\ g{\isacharparenleft}\isadigit{1}{\isacharcomma}\isadigit{1}{\isacharparenright}{\isachardoublequote}\isanewline
44.38 @@ -54,7 +54,7 @@
44.39 \noindent
44.40 More exciting theorems require induction, which is discussed below.
44.41
44.42 -Because lemma \isa{termi_lem} above was only turned into a
44.43 +Because lemma \isa{termi{\isacharunderscore}lem} above was only turned into a
44.44 simplification rule for the sake of the termination proof, we may want to
44.45 disable it again:%
44.46 \end{isamarkuptext}%
44.47 @@ -63,22 +63,23 @@
44.48 The attentive reader may wonder why we chose to call our function \isa{g}
44.49 rather than \isa{f} the second time around. The reason is that, despite
44.50 the failed termination proof, the definition of \isa{f} did not
44.51 -fail (and thus we could not define it a second time). However, all theorems
44.52 -about \isa{f}, for example \isa{f.simps}, carry as a precondition the
44.53 -unproved termination condition. Moreover, the theorems \isa{f.simps} are
44.54 -not simplification rules. However, this mechanism allows a delayed proof of
44.55 -termination: instead of proving \isa{termi_lem} up front, we could prove
44.56 +fail, and thus we could not define it a second time. However, all theorems
44.57 +about \isa{f}, for example \isa{f{\isachardot}simps}, carry as a precondition
44.58 +the unproved termination condition. Moreover, the theorems
44.59 +\isa{f{\isachardot}simps} are not simplification rules. However, this mechanism
44.60 +allows a delayed proof of termination: instead of proving
44.61 +\isa{termi{\isacharunderscore}lem} up front, we could prove
44.62 it later on and then use it to remove the preconditions from the theorems
44.63 about \isa{f}. In most cases this is more cumbersome than proving things
44.64 -up front
44.65 +up front.
44.66 %FIXME, with one exception: nested recursion.
44.67
44.68 Although all the above examples employ measure functions, \isacommand{recdef}
44.69 allows arbitrary wellfounded relations. For example, termination of
44.70 -Ackermann's function requires the lexicographic product \isa{<*lex*>}:%
44.71 +Ackermann's function requires the lexicographic product \isa{{\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}}:%
44.72 \end{isamarkuptext}%
44.73 \isacommand{consts}\ ack\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}nat{\isacharasterisk}nat\ {\isasymRightarrow}\ nat{\isachardoublequote}\isanewline
44.74 -\isacommand{recdef}\ ack\ {\isachardoublequote}measure{\isacharparenleft}{\isacharpercent}m{\isachardot}\ m{\isacharparenright}\ {\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}\ measure{\isacharparenleft}{\isacharpercent}n{\isachardot}\ n{\isacharparenright}{\isachardoublequote}\isanewline
44.75 +\isacommand{recdef}\ ack\ {\isachardoublequote}measure{\isacharparenleft}{\isasymlambda}m{\isachardot}\ m{\isacharparenright}\ {\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}\ measure{\isacharparenleft}{\isasymlambda}n{\isachardot}\ n{\isacharparenright}{\isachardoublequote}\isanewline
44.76 \ \ {\isachardoublequote}ack{\isacharparenleft}\isadigit{0}{\isacharcomma}n{\isacharparenright}\ \ \ \ \ \ \ \ \ {\isacharequal}\ Suc\ n{\isachardoublequote}\isanewline
44.77 \ \ {\isachardoublequote}ack{\isacharparenleft}Suc\ m{\isacharcomma}\isadigit{0}{\isacharparenright}\ \ \ \ \ {\isacharequal}\ ack{\isacharparenleft}m{\isacharcomma}\ \isadigit{1}{\isacharparenright}{\isachardoublequote}\isanewline
44.78 \ \ {\isachardoublequote}ack{\isacharparenleft}Suc\ m{\isacharcomma}Suc\ n{\isacharparenright}\ {\isacharequal}\ ack{\isacharparenleft}m{\isacharcomma}ack{\isacharparenleft}Suc\ m{\isacharcomma}n{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
45.1 --- a/doc-src/TutorialI/Recdef/examples.thy Fri Sep 01 18:29:52 2000 +0200
45.2 +++ b/doc-src/TutorialI/Recdef/examples.thy Fri Sep 01 19:09:44 2000 +0200
45.3 @@ -13,13 +13,13 @@
45.4 "fib (Suc(Suc x)) = fib x + fib (Suc x)";
45.5
45.6 text{*\noindent
45.7 -The definition of \isa{fib} is accompanied by a \bfindex{measure function}
45.8 -@{term"%n. n"} which maps the argument of \isa{fib} to a
45.9 +The definition of @{term"fib"} is accompanied by a \bfindex{measure function}
45.10 +@{term"%n. n"} which maps the argument of @{term"fib"} to a
45.11 natural number. The requirement is that in each equation the measure of the
45.12 argument on the left-hand side is strictly greater than the measure of the
45.13 -argument of each recursive call. In the case of \isa{fib} this is
45.14 +argument of each recursive call. In the case of @{term"fib"} this is
45.15 obviously true because the measure function is the identity and
45.16 -@{term"Suc(Suc x)"} is strictly greater than both \isa{x} and
45.17 +@{term"Suc(Suc x)"} is strictly greater than both @{term"x"} and
45.18 @{term"Suc x"}.
45.19
45.20 Slightly more interesting is the insertion of a fixed element
45.21 @@ -55,8 +55,8 @@
45.22 "sep1(a, xs) = xs";
45.23
45.24 text{*\noindent
45.25 -This defines exactly the same function as \isa{sep} above, i.e.\
45.26 -\isa{sep1 = sep}.
45.27 +This defines exactly the same function as @{term"sep"} above, i.e.\
45.28 +@{prop"sep1 = sep"}.
45.29
45.30 \begin{warn}
45.31 \isacommand{recdef} only takes the first argument of a (curried)
45.32 @@ -84,7 +84,7 @@
45.33
45.34 text{*\noindent
45.35 For non-recursive functions the termination measure degenerates to the empty
45.36 -set \isa{\{\}}.
45.37 +set @{term"{}"}.
45.38 *}
45.39
45.40 (*<*)
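The examples.thy hunk above mentions sep, which inserts a fixed element between any two elements of a list; its defining equations are elided in this diff. A plausible Python rendering of that behaviour (reconstructed from the description, not copied from the theory):

```python
# sep a xs inserts a between any two adjacent elements of xs,
# matching the description in the hunk: lists of length < 2 are
# returned unchanged, otherwise a goes after the head.

def sep(a, xs):
    if len(xs) < 2:
        return list(xs)
    return [xs[0], a] + sep(a, xs[1:])
```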
46.1 --- a/doc-src/TutorialI/Recdef/simplification.thy Fri Sep 01 18:29:52 2000 +0200
46.2 +++ b/doc-src/TutorialI/Recdef/simplification.thy Fri Sep 01 19:09:44 2000 +0200
46.3 @@ -7,7 +7,7 @@
46.4 equations become simplification rules, just as with
46.5 \isacommand{primrec}. In most cases this works fine, but there is a subtle
46.6 problem that must be mentioned: simplification may not
46.7 -terminate because of automatic splitting of @{name"if"}.
46.8 +terminate because of automatic splitting of @{text"if"}.
46.9 Let us look at an example:
46.10 *}
46.11
46.12 @@ -24,9 +24,9 @@
46.13 is proved automatically because it is already present as a lemma in the
46.14 arithmetic library. Thus the recursion equation becomes a simplification
46.15 rule. Of course the equation is nonterminating if we are allowed to unfold
46.16 -the recursive call inside the @{name"if"} branch, which is why programming
46.17 +the recursive call inside the @{text"if"} branch, which is why programming
46.18 languages and our simplifier don't do that. Unfortunately the simplifier does
46.19 -something else which leads to the same problem: it splits @{name"if"}s if the
46.20 +something else which leads to the same problem: it splits @{text"if"}s if the
46.21 condition simplifies to neither @{term"True"} nor @{term"False"}. For
46.22 example, simplification reduces
46.23 \begin{quote}
46.24 @@ -41,17 +41,18 @@
46.25 @{term[display]"(n=0 --> m=k) & (n ~= 0 --> gcd(n, m mod n)=k)"}
46.26 \end{quote}
46.27 Since the recursive call @{term"gcd(n, m mod n)"} is no longer protected by
46.28 -an @{name"if"}, it is unfolded again, which leads to an infinite chain of
46.29 +an @{text"if"}, it is unfolded again, which leads to an infinite chain of
46.30 simplification steps. Fortunately, this problem can be avoided in many
46.31 different ways.
46.32
46.33 -The most radical solution is to disable the offending \@{name"split_if"} as
46.34 -shown in the section on case splits in \S\ref{sec:Simplification}. However,
46.35 -we do not recommend this because it means you will often have to invoke the
46.36 -rule explicitly when @{name"if"} is involved.
46.37 +The most radical solution is to disable the offending
46.38 +@{thm[source]split_if} as shown in the section on case splits in
46.39 +\S\ref{sec:Simplification}. However, we do not recommend this because it
46.40 +means you will often have to invoke the rule explicitly when @{text"if"} is
46.41 +involved.
46.42
46.43 If possible, the definition should be given by pattern matching on the left
46.44 -rather than @{name"if"} on the right. In the case of @{term"gcd"} the
46.45 +rather than @{text"if"} on the right. In the case of @{term"gcd"} the
46.46 following alternative definition suggests itself:
46.47 *}
46.48
46.49 @@ -66,7 +67,7 @@
46.50 @{prop"n ~= 0"}. Unfortunately, in general the case distinction
46.51 may not be expressible by pattern matching.
46.52
46.53 -A very simple alternative is to replace @{name"if"} by @{name"case"}, which
46.54 +A very simple alternative is to replace @{text"if"} by @{text"case"}, which
46.55 is also available for @{typ"bool"} but is not split automatically:
46.56 *}
46.57
47.1 --- a/doc-src/TutorialI/Recdef/termination.thy Fri Sep 01 18:29:52 2000 +0200
47.2 +++ b/doc-src/TutorialI/Recdef/termination.thy Fri Sep 01 19:09:44 2000 +0200
47.3 @@ -1,5 +1,5 @@
47.4 (*<*)
47.5 -theory termination = Main:;
47.6 +theory termination = examples:
47.7 (*>*)
47.8
47.9 text{*
47.10 @@ -7,10 +7,10 @@
47.11 its termination with the help of the user-supplied measure. All of the above
47.12 examples are simple enough that Isabelle can prove automatically that the
47.13 measure of the argument goes down in each recursive call. As a result,
47.14 -\isa{$f$.simps} will contain the defining equations (or variants derived from
47.15 -them) as theorems. For example, look (via \isacommand{thm}) at
47.16 -\isa{sep.simps} and \isa{sep1.simps} to see that they define the same
47.17 -function. What is more, those equations are automatically declared as
47.18 +$f$@{text".simps"} will contain the defining equations (or variants derived
47.19 +from them) as theorems. For example, look (via \isacommand{thm}) at
47.20 +@{thm[source]sep.simps} and @{thm[source]sep1.simps} to see that they define
47.21 +the same function. What is more, those equations are automatically declared as
47.22 simplification rules.
47.23
47.24 In general, Isabelle may not be able to prove all termination condition
47.25 @@ -32,8 +32,8 @@
47.26 lemma termi_lem[simp]: "\\<not> x \\<le> y \\<Longrightarrow> x - Suc y < x - y";
47.27
47.28 txt{*\noindent
47.29 -It was not proved automatically because of the special nature of \isa{-}
47.30 -on \isa{nat}. This requires more arithmetic than is tried by default:
47.31 +It was not proved automatically because of the special nature of @{text"-"}
47.32 +on @{typ"nat"}. This requires more arithmetic than is tried by default:
47.33 *}
47.34
47.35 by(arith);
47.36 @@ -49,8 +49,8 @@
47.37 "g(x,y) = (if x \\<le> y then x else g(x,y+1))";
47.38
47.39 text{*\noindent
47.40 -This time everything works fine. Now \isa{g.simps} contains precisely the
47.41 -stated recursion equation for \isa{g} and they are simplification
47.42 +This time everything works fine. Now @{thm[source]g.simps} contains precisely
47.43 +the stated recursion equation for @{term"g"} and they are simplification
47.44 rules. Thus we can automatically prove
47.45 *}
47.46
47.47 @@ -60,7 +60,7 @@
47.48 text{*\noindent
47.49 More exciting theorems require induction, which is discussed below.
47.50
47.51 -Because lemma \isa{termi_lem} above was only turned into a
47.52 +Because lemma @{thm[source]termi_lem} above was only turned into a
47.53 simplification rule for the sake of the termination proof, we may want to
47.54 disable it again:
47.55 *}
47.56 @@ -68,26 +68,27 @@
47.57 lemmas [simp del] = termi_lem;
47.58
47.59 text{*
47.60 -The attentive reader may wonder why we chose to call our function \isa{g}
47.61 -rather than \isa{f} the second time around. The reason is that, despite
47.62 -the failed termination proof, the definition of \isa{f} did not
47.63 -fail (and thus we could not define it a second time). However, all theorems
47.64 -about \isa{f}, for example \isa{f.simps}, carry as a precondition the
47.65 -unproved termination condition. Moreover, the theorems \isa{f.simps} are
47.66 -not simplification rules. However, this mechanism allows a delayed proof of
47.67 -termination: instead of proving \isa{termi_lem} up front, we could prove
47.68 +The attentive reader may wonder why we chose to call our function @{term"g"}
47.69 +rather than @{term"f"} the second time around. The reason is that, despite
47.70 +the failed termination proof, the definition of @{term"f"} did not
47.71 +fail, and thus we could not define it a second time. However, all theorems
47.72 +about @{term"f"}, for example @{thm[source]f.simps}, carry as a precondition
47.73 +the unproved termination condition. Moreover, the theorems
47.74 +@{thm[source]f.simps} are not simplification rules. However, this mechanism
47.75 +allows a delayed proof of termination: instead of proving
47.76 +@{thm[source]termi_lem} up front, we could prove
47.77 it later on and then use it to remove the preconditions from the theorems
47.78 -about \isa{f}. In most cases this is more cumbersome than proving things
47.79 -up front
47.80 +about @{term"f"}. In most cases this is more cumbersome than proving things
47.81 +up front.
47.82 %FIXME, with one exception: nested recursion.
47.83
47.84 Although all the above examples employ measure functions, \isacommand{recdef}
47.85 allows arbitrary wellfounded relations. For example, termination of
47.86 -Ackermann's function requires the lexicographic product \isa{<*lex*>}:
47.87 +Ackermann's function requires the lexicographic product @{text"<*lex*>"}:
47.88 *}
47.89
47.90 consts ack :: "nat*nat \\<Rightarrow> nat";
47.91 -recdef ack "measure(%m. m) <*lex*> measure(%n. n)"
47.92 +recdef ack "measure(\<lambda>m. m) <*lex*> measure(\<lambda>n. n)"
47.93 "ack(0,n) = Suc n"
47.94 "ack(Suc m,0) = ack(m, 1)"
47.95 "ack(Suc m,Suc n) = ack(m,ack(Suc m,n))";
48.1 --- a/doc-src/TutorialI/ToyList/ToyList.thy Fri Sep 01 18:29:52 2000 +0200
48.2 +++ b/doc-src/TutorialI/ToyList/ToyList.thy Fri Sep 01 19:09:44 2000 +0200
48.3 @@ -1,10 +1,10 @@
48.4 theory ToyList = PreList:
48.5
48.6 text{*\noindent
48.7 -HOL already has a predefined theory of lists called \isa{List} ---
48.8 -\isa{ToyList} is merely a small fragment of it chosen as an example. In
48.9 +HOL already has a predefined theory of lists called @{text"List"} ---
48.10 +@{text"ToyList"} is merely a small fragment of it chosen as an example. In
48.11 contrast to what is recommended in \S\ref{sec:Basic:Theories},
48.12 -\isa{ToyList} is not based on \isa{Main} but on \isa{PreList}, a
48.13 +@{text"ToyList"} is not based on @{text"Main"} but on @{text"PreList"}, a
48.14 theory that contains pretty much everything but lists, thus avoiding
48.15 ambiguities caused by defining lists twice.
48.16 *}
48.17 @@ -16,27 +16,28 @@
48.18 The datatype\index{*datatype} \isaindexbold{list} introduces two
48.19 constructors \isaindexbold{Nil} and \isaindexbold{Cons}, the
48.20 empty~list and the operator that adds an element to the front of a list. For
48.21 -example, the term \isa{Cons True (Cons False Nil)} is a value of type
48.22 -@{typ"bool list"}, namely the list with the elements \isa{True} and
48.23 -\isa{False}. Because this notation becomes unwieldy very quickly, the
48.24 +example, the term \isa{Cons True (Cons False Nil)} is a value of
48.25 +type @{typ"bool list"}, namely the list with the elements @{term"True"} and
48.26 +@{term"False"}. Because this notation becomes unwieldy very quickly, the
48.27 datatype declaration is annotated with an alternative syntax: instead of
48.28 -\isa{Nil} and \isa{Cons x xs} we can write
48.29 -\isa{[]}\index{$HOL2list@\texttt{[]}|bold} and
48.30 +@{term[source]Nil} and \isa{Cons x xs} we can write
48.31 +@{term"[]"}\index{$HOL2list@\texttt{[]}|bold} and
48.32 @{term"x # xs"}\index{$HOL2list@\texttt{\#}|bold}. In fact, this
48.33 alternative syntax is the standard syntax. Thus the list \isa{Cons True
48.34 (Cons False Nil)} becomes @{term"True # False # []"}. The annotation
48.35 -\isacommand{infixr}\indexbold{*infixr} means that \isa{\#} associates to
48.36 -the right, i.e.\ the term @{term"x # y # z"} is read as \isa{x
48.37 -\# (y \# z)} and not as \isa{(x \# y) \# z}.
48.38 +\isacommand{infixr}\indexbold{*infixr} means that @{text"#"} associates to
48.39 +the right, i.e.\ the term @{term"x # y # z"} is read as @{text"x # (y # z)"}
48.40 +and not as @{text"(x # y) # z"}.
48.41
48.42 \begin{warn}
48.43 Syntax annotations are a powerful but completely optional feature. You
48.44 - could drop them from theory \isa{ToyList} and go back to the identifiers
48.45 - \isa{Nil} and \isa{Cons}. However, lists are such a central datatype
48.46 + could drop them from theory @{text"ToyList"} and go back to the identifiers
48.47 + @{term[source]Nil} and @{term[source]Cons}. However, lists are such a
48.48 + central datatype
48.49 that their syntax is highly customized. We recommend that novices should
48.50 not use syntax annotations in their own theories.
48.51 \end{warn}
48.52 -Next, two functions \isa{app} and \isaindexbold{rev} are declared:
48.53 +Next, two functions @{text"app"} and \isaindexbold{rev} are declared:
48.54 *}
48.55
48.56 consts app :: "'a list \\<Rightarrow> 'a list \\<Rightarrow> 'a list" (infixr "@" 65)
48.57 @@ -47,7 +48,7 @@
48.58 In contrast to ML, Isabelle insists on explicit declarations of all functions
48.59 (keyword \isacommand{consts}). (Apart from the declaration-before-use
48.60 restriction, the order of items in a theory file is unconstrained.) Function
48.61 -\isa{app} is annotated with concrete syntax too. Instead of the prefix
48.62 +@{term"app"} is annotated with concrete syntax too. Instead of the prefix
48.63 syntax \isa{app xs ys} the infix
48.64 @{term"xs @ ys"}\index{$HOL2list@\texttt{\at}|bold} becomes the preferred
48.65 form. Both functions are defined recursively:
48.66 @@ -63,8 +64,8 @@
48.67
48.68 text{*
48.69 \noindent
48.70 -The equations for \isa{app} and \isa{rev} hardly need comments:
48.71 -\isa{app} appends two lists and \isa{rev} reverses a list. The keyword
48.72 +The equations for @{term"app"} and @{term"rev"} hardly need comments:
48.73 +@{term"app"} appends two lists and @{term"rev"} reverses a list. The keyword
48.74 \isacommand{primrec}\index{*primrec} indicates that the recursion is of a
48.75 particularly primitive kind where each recursive call peels off a datatype
48.76 constructor from one of the arguments. Thus the
48.77 @@ -114,7 +115,7 @@
48.78 illustrate not just the basic proof commands but also the typical proof
48.79 process.
48.80
48.81 -\subsubsection*{Main goal: \texttt{rev(rev xs) = xs}}
48.82 +\subsubsection*{Main goal: @{text"rev(rev xs) = xs"}}
48.83
48.84 Our goal is to show that reversing a list twice produces the original
48.85 list. The input line
48.86 @@ -125,14 +126,15 @@
48.87 txt{*\index{*theorem|bold}\index{*simp (attribute)|bold}
48.88 \begin{itemize}
48.89 \item
48.90 -establishes a new theorem to be proved, namely \isa{rev(rev xs) = xs},
48.91 +establishes a new theorem to be proved, namely @{prop"rev(rev xs) = xs"},
48.92 \item
48.93 -gives that theorem the name \isa{rev_rev} by which it can be referred to,
48.94 +gives that theorem the name @{text"rev_rev"} by which it can be
48.95 +referred to,
48.96 \item
48.97 -and tells Isabelle (via \isa{[simp]}) to use the theorem (once it has been
48.98 +and tells Isabelle (via @{text"[simp]"}) to use the theorem (once it has been
48.99 proved) as a simplification rule, i.e.\ all future proofs involving
48.100 -simplification will replace occurrences of \isa{rev(rev xs)} by
48.101 -\isa{xs}.
48.102 +simplification will replace occurrences of @{term"rev(rev xs)"} by
48.103 +@{term"xs"}.
48.104
48.105 The name and the simplification attribute are optional.
48.106 \end{itemize}
48.107 @@ -145,7 +147,7 @@
48.108 ~1.~rev~(rev~xs)~=~xs
48.109 \end{isabelle}
48.110 The first three lines tell us that we are 0 steps into the proof of
48.111 -theorem \isa{rev_rev}; for compactness reasons we rarely show these
48.112 +theorem @{text"rev_rev"}; for compactness reasons we rarely show these
48.113 initial lines in this tutorial. The remaining lines display the current
48.114 proof state.
48.115 Until we have finished a proof, the proof state always looks like this:
48.116 @@ -158,26 +160,26 @@
48.117 where $G$
48.118 is the overall goal that we are trying to prove, and the numbered lines
48.119 contain the subgoals $G\sb{1}$, \dots, $G\sb{n}$ that we need to prove to
48.120 -establish $G$. At \isa{step 0} there is only one subgoal, which is
48.121 +establish $G$. At @{text"step 0"} there is only one subgoal, which is
48.122 identical with the overall goal. Normally $G$ is constant and only serves as
48.123 a reminder. Hence we rarely show it in this tutorial.
48.124
48.125 -Let us now get back to \isa{rev(rev xs) = xs}. Properties of recursively
48.126 +Let us now get back to @{prop"rev(rev xs) = xs"}. Properties of recursively
48.127 defined functions are best established by induction. In this case there is
48.128 -not much choice except to induct on \isa{xs}:
48.129 +not much choice except to induct on @{term"xs"}:
48.130 *}
48.131
48.132 apply(induct_tac xs);
48.133
48.134 txt{*\noindent\index{*induct_tac}%
48.135 -This tells Isabelle to perform induction on variable \isa{xs}. The suffix
48.136 -\isa{tac} stands for ``tactic'', a synonym for ``theorem proving function''.
48.137 +This tells Isabelle to perform induction on variable @{term"xs"}. The suffix
48.138 +@{term"tac"} stands for ``tactic'', a synonym for ``theorem proving function''.
48.139 By default, induction acts on the first subgoal. The new proof state contains
48.140 -two subgoals, namely the base case (\isa{Nil}) and the induction step
48.141 -(\isa{Cons}):
48.142 +two subgoals, namely the base case (@{term[source]Nil}) and the induction step
48.143 +(@{term[source]Cons}):
48.144 \begin{isabelle}
48.145 ~1.~rev~(rev~[])~=~[]\isanewline
48.146 -~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list%
48.147 +~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list
48.148 \end{isabelle}
48.149
48.150 The induction step is an example of the general format of a subgoal:
48.151 @@ -191,12 +193,11 @@
48.152 conclusion} is the actual proposition to be proved. Typical proof steps
48.153 that add new assumptions are induction or case distinction. In our example
48.154 the only assumption is the induction hypothesis @{term"rev (rev list) =
48.155 - list"}, where \isa{list} is a variable name chosen by Isabelle. If there
48.156 + list"}, where @{term"list"} is a variable name chosen by Isabelle. If there
48.157 are multiple assumptions, they are enclosed in the bracket pair
48.158 \indexboldpos{\isasymlbrakk}{$Isabrl} and
48.159 \indexboldpos{\isasymrbrakk}{$Isabrr} and separated by semicolons.
48.160
48.161 -%FIXME indent!
48.162 Let us try to solve both goals automatically:
48.163 *}
48.164
48.165 @@ -204,9 +205,9 @@
48.166
48.167 txt{*\noindent
48.168 This command tells Isabelle to apply a proof strategy called
48.169 -\isa{auto} to all subgoals. Essentially, \isa{auto} tries to
48.170 +@{text"auto"} to all subgoals. Essentially, @{text"auto"} tries to
48.171 ``simplify'' the subgoals. In our case, subgoal~1 is solved completely (thanks
48.172 -to the equation \isa{rev [] = []}) and disappears; the simplified version
48.173 +to the equation @{prop"rev [] = []"}) and disappears; the simplified version
48.174 of subgoal~2 becomes the new subgoal~1:
48.175 \begin{isabelle}
48.176 ~1.~\dots~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev~list~@~a~\#~[])~=~a~\#~list
48.177 @@ -217,7 +218,7 @@
48.178 oops
48.179 (*>*)
48.180
48.181 -subsubsection{*First lemma: \texttt{rev(xs \at~ys) = (rev ys) \at~(rev xs)}*}
48.182 +subsubsection{*First lemma: @{text"rev(xs @ ys) = (rev ys) @ (rev xs)"}*}
48.183
48.184 text{*
48.185 After abandoning the above proof attempt\indexbold{abandon
48.186 @@ -233,9 +234,9 @@
48.187 \emph{theorem}\index{theorem} and \emph{lemma}\index{lemma} pretty much
48.188 interchangeably.
48.189
48.190 -There are two variables that we could induct on: \isa{xs} and
48.191 -\isa{ys}. Because \isa{\at} is defined by recursion on
48.192 -the first argument, \isa{xs} is the correct one:
48.193 +There are two variables that we could induct on: @{term"xs"} and
48.194 +@{term"ys"}. Because @{text"@"} is defined by recursion on
48.195 +the first argument, @{term"xs"} is the correct one:
48.196 *}
48.197
48.198 apply(induct_tac xs);
48.199 @@ -259,7 +260,7 @@
48.200 oops
48.201 (*>*)
48.202
48.203 -subsubsection{*Second lemma: \texttt{xs \at~[] = xs}*}
48.204 +subsubsection{*Second lemma: @{text"xs @ [] = xs"}*}
48.205
48.206 text{*
48.207 This time the canonical proof procedure
48.208 @@ -271,7 +272,7 @@
48.209
48.210 txt{*
48.211 \noindent
48.212 -leads to the desired message \isa{No subgoals!}:
48.213 +leads to the desired message @{text"No subgoals!"}:
48.214 \begin{isabelle}
48.215 xs~@~[]~=~xs\isanewline
48.216 No~subgoals!
48.217 @@ -284,12 +285,12 @@
48.218
48.219 text{*\noindent\indexbold{$Isar@\texttt{.}}%
48.220 As a result of that final dot, Isabelle associates the lemma
48.221 -just proved with its name. Notice that in the lemma \isa{app_Nil2} (as
48.222 -printed out after the final dot) the free variable \isa{xs} has been
48.223 -replaced by the unknown \isa{?xs}, just as explained in
48.224 -\S\ref{sec:variables}. Note that instead of instead of \isacommand{apply}
48.225 +just proved with its name. Instead of \isacommand{apply}
48.226 followed by a dot, you can simply write \isacommand{by}\indexbold{by},
48.227 -which we do most of the time.
48.228 +which we do most of the time. Notice that in lemma @{thm[source]app_Nil2}
48.229 +(as printed out after the final dot) the free variable @{term"xs"} has been
48.230 +replaced by the unknown @{text"?xs"}, just as explained in
48.231 +\S\ref{sec:variables}.
48.232
48.233 Going back to the proof of the first lemma
48.234 *}
48.235 @@ -300,24 +301,24 @@
48.236
48.237 txt{*
48.238 \noindent
48.239 -we find that this time \isa{auto} solves the base case, but the
48.240 +we find that this time @{text"auto"} solves the base case, but the
48.241 induction step merely simplifies to
48.242 \begin{isabelle}
48.243 ~1.~{\isasymAnd}a~list.\isanewline
48.244 ~~~~~~~rev~(list~@~ys)~=~rev~ys~@~rev~list~{\isasymLongrightarrow}\isanewline
48.245 ~~~~~~~(rev~ys~@~rev~list)~@~a~\#~[]~=~rev~ys~@~rev~list~@~a~\#~[]
48.246 \end{isabelle}
48.247 -Now we need to remember that \isa{\at} associates to the right, and that
48.248 -\isa{\#} and \isa{\at} have the same priority (namely the \isa{65}
48.249 +Now we need to remember that @{text"@"} associates to the right, and that
48.250 +@{text"#"} and @{text"@"} have the same priority (namely the @{text"65"}
48.251 in their \isacommand{infixr} annotation). Thus the conclusion really is
48.252 \begin{isabelle}
48.253 -~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))%
48.254 +~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))
48.255 \end{isabelle}
48.256 -and the missing lemma is associativity of \isa{\at}.
48.257 +and the missing lemma is associativity of @{text"@"}.
48.258 *}
48.259 (*<*)oops(*>*)
48.260
48.261 -subsubsection{*Third lemma: \texttt{(xs \at~ys) \at~zs = xs \at~(ys \at~zs)}*}
48.262 +subsubsection{*Third lemma: @{text"(xs @ ys) @ zs = xs @ (ys @ zs)"}*}
48.263
48.264 text{*
48.265 Abandoning the previous proof, the canonical proof procedure
48.266 @@ -346,7 +347,7 @@
48.267 by(auto);
48.268
48.269 text{*\noindent
48.270 -The final \isa{end} tells Isabelle to close the current theory because
48.271 +The final \isacommand{end} tells Isabelle to close the current theory because
48.272 we are finished with its development:
48.273 *}
48.274
49.1 --- a/doc-src/TutorialI/ToyList/document/ToyList.tex Fri Sep 01 18:29:52 2000 +0200
49.2 +++ b/doc-src/TutorialI/ToyList/document/ToyList.tex Fri Sep 01 19:09:44 2000 +0200
49.3 @@ -17,23 +17,24 @@
49.4 The datatype\index{*datatype} \isaindexbold{list} introduces two
49.5 constructors \isaindexbold{Nil} and \isaindexbold{Cons}, the
49.6 empty~list and the operator that adds an element to the front of a list. For
49.7 -example, the term \isa{Cons True (Cons False Nil)} is a value of type
49.8 -\isa{bool\ list}, namely the list with the elements \isa{True} and
49.9 +example, the term \isa{Cons True (Cons False Nil)} is a value of
49.10 +type \isa{bool\ list}, namely the list with the elements \isa{True} and
49.11 \isa{False}. Because this notation becomes unwieldy very quickly, the
49.12 datatype declaration is annotated with an alternative syntax: instead of
49.13 \isa{Nil} and \isa{Cons x xs} we can write
49.14 -\isa{[]}\index{$HOL2list@\texttt{[]}|bold} and
49.15 -\isa{\mbox{x}\ {\isacharhash}\ \mbox{xs}}\index{$HOL2list@\texttt{\#}|bold}. In fact, this
49.16 +\isa{{\isacharbrackleft}{\isacharbrackright}}\index{$HOL2list@\texttt{[]}|bold} and
49.17 +\isa{x\ {\isacharhash}\ xs}\index{$HOL2list@\texttt{\#}|bold}. In fact, this
49.18 alternative syntax is the standard syntax. Thus the list \isa{Cons True
49.19 (Cons False Nil)} becomes \isa{True\ {\isacharhash}\ False\ {\isacharhash}\ {\isacharbrackleft}{\isacharbrackright}}. The annotation
49.20 -\isacommand{infixr}\indexbold{*infixr} means that \isa{\#} associates to
49.21 -the right, i.e.\ the term \isa{\mbox{x}\ {\isacharhash}\ \mbox{y}\ {\isacharhash}\ \mbox{z}} is read as \isa{x
49.22 -\# (y \# z)} and not as \isa{(x \# y) \# z}.
49.23 +\isacommand{infixr}\indexbold{*infixr} means that \isa{{\isacharhash}} associates to
49.24 +the right, i.e.\ the term \isa{x\ {\isacharhash}\ y\ {\isacharhash}\ z} is read as \isa{x\ {\isacharhash}\ {\isacharparenleft}y\ {\isacharhash}\ z{\isacharparenright}}
49.25 +and not as \isa{{\isacharparenleft}x\ {\isacharhash}\ y{\isacharparenright}\ {\isacharhash}\ z}.
49.26
49.27 \begin{warn}
49.28 Syntax annotations are a powerful but completely optional feature. You
49.29 could drop them from theory \isa{ToyList} and go back to the identifiers
49.30 - \isa{Nil} and \isa{Cons}. However, lists are such a central datatype
49.31 + \isa{Nil} and \isa{Cons}. However, lists are such a
49.32 + central datatype
49.33 that their syntax is highly customized. We recommend that novices should
49.34 not use syntax annotations in their own theories.
49.35 \end{warn}
49.36 @@ -46,9 +47,9 @@
49.37 In contrast to ML, Isabelle insists on explicit declarations of all functions
49.38 (keyword \isacommand{consts}). (Apart from the declaration-before-use
49.39 restriction, the order of items in a theory file is unconstrained.) Function
49.40 -\isa{app} is annotated with concrete syntax too. Instead of the prefix
49.41 +\isa{op\ {\isacharat}} is annotated with concrete syntax too. Instead of the prefix
49.42 syntax \isa{app xs ys} the infix
49.43 -\isa{\mbox{xs}\ {\isacharat}\ \mbox{ys}}\index{$HOL2list@\texttt{\at}|bold} becomes the preferred
49.44 +\isa{xs\ {\isacharat}\ ys}\index{$HOL2list@\texttt{\at}|bold} becomes the preferred
49.45 form. Both functions are defined recursively:%
49.46 \end{isamarkuptext}%
49.47 \isacommand{primrec}\isanewline
49.48 @@ -60,8 +61,8 @@
49.49 {\isachardoublequote}rev\ {\isacharparenleft}x\ {\isacharhash}\ xs{\isacharparenright}\ \ {\isacharequal}\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}x\ {\isacharhash}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}{\isachardoublequote}%
49.50 \begin{isamarkuptext}%
49.51 \noindent
49.52 -The equations for \isa{app} and \isa{rev} hardly need comments:
49.53 -\isa{app} appends two lists and \isa{rev} reverses a list. The keyword
49.54 +The equations for \isa{op\ {\isacharat}} and \isa{rev} hardly need comments:
49.55 +\isa{op\ {\isacharat}} appends two lists and \isa{rev} reverses a list. The keyword
49.56 \isacommand{primrec}\index{*primrec} indicates that the recursion is of a
49.57 particularly primitive kind where each recursive call peels off a datatype
49.58 constructor from one of the arguments. Thus the
49.59 @@ -110,7 +111,7 @@
49.60 illustrate not just the basic proof commands but also the typical proof
49.61 process.
49.62
49.63 -\subsubsection*{Main goal: \texttt{rev(rev xs) = xs}}
49.64 +\subsubsection*{Main goal: \isa{rev{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs}}
49.65
49.66 Our goal is to show that reversing a list twice produces the original
49.67 list. The input line%
49.68 @@ -120,13 +121,14 @@
49.69 \index{*theorem|bold}\index{*simp (attribute)|bold}
49.70 \begin{itemize}
49.71 \item
49.72 -establishes a new theorem to be proved, namely \isa{rev(rev xs) = xs},
49.73 +establishes a new theorem to be proved, namely \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs},
49.74 \item
49.75 -gives that theorem the name \isa{rev_rev} by which it can be referred to,
49.76 +gives that theorem the name \isa{rev{\isacharunderscore}rev} by which it can be
49.77 +referred to,
49.78 \item
49.79 -and tells Isabelle (via \isa{[simp]}) to use the theorem (once it has been
49.80 +and tells Isabelle (via \isa{{\isacharbrackleft}simp{\isacharbrackright}}) to use the theorem (once it has been
49.81 proved) as a simplification rule, i.e.\ all future proofs involving
49.82 -simplification will replace occurrences of \isa{rev(rev xs)} by
49.83 +simplification will replace occurrences of \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}} by
49.84 \isa{xs}.
49.85
49.86 The name and the simplification attribute are optional.
49.87 @@ -140,7 +142,7 @@
49.88 ~1.~rev~(rev~xs)~=~xs
49.89 \end{isabelle}
49.90 The first three lines tell us that we are 0 steps into the proof of
49.91 -theorem \isa{rev_rev}; for compactness reasons we rarely show these
49.92 +theorem \isa{rev{\isacharunderscore}rev}; for compactness reasons we rarely show these
49.93 initial lines in this tutorial. The remaining lines display the current
49.94 proof state.
49.95 Until we have finished a proof, the proof state always looks like this:
49.96 @@ -153,11 +155,11 @@
49.97 where $G$
49.98 is the overall goal that we are trying to prove, and the numbered lines
49.99 contain the subgoals $G\sb{1}$, \dots, $G\sb{n}$ that we need to prove to
49.100 -establish $G$. At \isa{step 0} there is only one subgoal, which is
49.101 +establish $G$. At \isa{step\ \isadigit{0}} there is only one subgoal, which is
49.102 identical with the overall goal. Normally $G$ is constant and only serves as
49.103 a reminder. Hence we rarely show it in this tutorial.
49.104
49.105 -Let us now get back to \isa{rev(rev xs) = xs}. Properties of recursively
49.106 +Let us now get back to \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs}. Properties of recursively
49.107 defined functions are best established by induction. In this case there is
49.108 not much choice except to induct on \isa{xs}:%
49.109 \end{isamarkuptxt}%
49.110 @@ -171,7 +173,7 @@
49.111 (\isa{Cons}):
49.112 \begin{isabelle}
49.113 ~1.~rev~(rev~[])~=~[]\isanewline
49.114 -~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list%
49.115 +~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list
49.116 \end{isabelle}
49.117
49.118 The induction step is an example of the general format of a subgoal:
49.119 @@ -184,12 +186,11 @@
49.120 The {\it assumptions} are the local assumptions for this subgoal and {\it
49.121 conclusion} is the actual proposition to be proved. Typical proof steps
49.122 that add new assumptions are induction or case distinction. In our example
49.123 -the only assumption is the induction hypothesis \isa{rev\ {\isacharparenleft}rev\ \mbox{list}{\isacharparenright}\ {\isacharequal}\ \mbox{list}}, where \isa{list} is a variable name chosen by Isabelle. If there
49.124 +the only assumption is the induction hypothesis \isa{rev\ {\isacharparenleft}rev\ list{\isacharparenright}\ {\isacharequal}\ list}, where \isa{list} is a variable name chosen by Isabelle. If there
49.125 are multiple assumptions, they are enclosed in the bracket pair
49.126 \indexboldpos{\isasymlbrakk}{$Isabrl} and
49.127 \indexboldpos{\isasymrbrakk}{$Isabrr} and separated by semicolons.
49.128
49.129 -%FIXME indent!
49.130 Let us try to solve both goals automatically:%
49.131 \end{isamarkuptxt}%
49.132 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}%
49.133 @@ -198,7 +199,7 @@
49.134 This command tells Isabelle to apply a proof strategy called
49.135 \isa{auto} to all subgoals. Essentially, \isa{auto} tries to
49.136 ``simplify'' the subgoals. In our case, subgoal~1 is solved completely (thanks
49.137 -to the equation \isa{rev [] = []}) and disappears; the simplified version
49.138 +to the equation \isa{rev\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}}) and disappears; the simplified version
49.139 of subgoal~2 becomes the new subgoal~1:
49.140 \begin{isabelle}
49.141 ~1.~\dots~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev~list~@~a~\#~[])~=~a~\#~list
49.142 @@ -206,7 +207,7 @@
49.143 In order to simplify this subgoal further, a lemma suggests itself.%
49.144 \end{isamarkuptxt}%
49.145 %
49.146 -\isamarkupsubsubsection{First lemma: \texttt{rev(xs \at~ys) = (rev ys) \at~(rev xs)}}
49.147 +\isamarkupsubsubsection{First lemma: \isa{rev{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}rev\ ys{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}rev\ xs{\isacharparenright}}}
49.148 %
49.149 \begin{isamarkuptext}%
49.150 After abandoning the above proof attempt\indexbold{abandon
49.151 @@ -222,7 +223,7 @@
49.152 interchangeably.
49.153
49.154 There are two variables that we could induct on: \isa{xs} and
49.155 -\isa{ys}. Because \isa{\at} is defined by recursion on
49.156 +\isa{ys}. Because \isa{{\isacharat}} is defined by recursion on
49.157 the first argument, \isa{xs} is the correct one:%
49.158 \end{isamarkuptxt}%
49.159 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}%
49.160 @@ -241,7 +242,7 @@
49.161 the proof of a lemma usually remains implicit.%
49.162 \end{isamarkuptxt}%
49.163 %
49.164 -\isamarkupsubsubsection{Second lemma: \texttt{xs \at~[] = xs}}
49.165 +\isamarkupsubsubsection{Second lemma: \isa{xs\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ xs}}
49.166 %
49.167 \begin{isamarkuptext}%
49.168 This time the canonical proof procedure%
49.169 @@ -251,7 +252,7 @@
49.170 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}%
49.171 \begin{isamarkuptxt}%
49.172 \noindent
49.173 -leads to the desired message \isa{No subgoals!}:
49.174 +leads to the desired message \isa{No\ subgoals{\isacharbang}}:
49.175 \begin{isabelle}
49.176 xs~@~[]~=~xs\isanewline
49.177 No~subgoals!
49.178 @@ -263,12 +264,12 @@
49.179 \begin{isamarkuptext}%
49.180 \noindent\indexbold{$Isar@\texttt{.}}%
49.181 As a result of that final dot, Isabelle associates the lemma
49.182 -just proved with its name. Notice that in the lemma \isa{app_Nil2} (as
49.183 -printed out after the final dot) the free variable \isa{xs} has been
49.184 -replaced by the unknown \isa{?xs}, just as explained in
49.185 -\S\ref{sec:variables}. Note that instead of instead of \isacommand{apply}
49.186 +just proved with its name. Instead of \isacommand{apply}
49.187 followed by a dot, you can simply write \isacommand{by}\indexbold{by},
49.188 -which we do most of the time.
49.189 +which we do most of the time. Notice that in lemma \isa{app{\isacharunderscore}Nil\isadigit{2}}
49.190 +(as printed out after the final dot) the free variable \isa{xs} has been
49.191 +replaced by the unknown \isa{{\isacharquery}xs}, just as explained in
49.192 +\S\ref{sec:variables}.
49.193
49.194 Going back to the proof of the first lemma%
49.195 \end{isamarkuptext}%
49.196 @@ -284,16 +285,16 @@
49.197 ~~~~~~~rev~(list~@~ys)~=~rev~ys~@~rev~list~{\isasymLongrightarrow}\isanewline
49.198 ~~~~~~~(rev~ys~@~rev~list)~@~a~\#~[]~=~rev~ys~@~rev~list~@~a~\#~[]
49.199 \end{isabelle}
49.200 -Now we need to remember that \isa{\at} associates to the right, and that
49.201 -\isa{\#} and \isa{\at} have the same priority (namely the \isa{65}
49.202 +Now we need to remember that \isa{{\isacharat}} associates to the right, and that
49.203 +\isa{{\isacharhash}} and \isa{{\isacharat}} have the same priority (namely the \isa{\isadigit{6}\isadigit{5}}
49.204 in their \isacommand{infixr} annotation). Thus the conclusion really is
49.205 \begin{isabelle}
49.206 -~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))%
49.207 +~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))
49.208 \end{isabelle}
49.209 -and the missing lemma is associativity of \isa{\at}.%
49.210 +and the missing lemma is associativity of \isa{{\isacharat}}.%
49.211 \end{isamarkuptxt}%
49.212 %
49.213 -\isamarkupsubsubsection{Third lemma: \texttt{(xs \at~ys) \at~zs = xs \at~(ys \at~zs)}}
49.214 +\isamarkupsubsubsection{Third lemma: \isa{{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharat}\ zs\ {\isacharequal}\ xs\ {\isacharat}\ {\isacharparenleft}ys\ {\isacharat}\ zs{\isacharparenright}}}
49.215 %
49.216 \begin{isamarkuptext}%
49.217 Abandoning the previous proof, the canonical proof procedure%
49.218 @@ -318,7 +319,7 @@
49.219 \isacommand{by}{\isacharparenleft}auto{\isacharparenright}%
49.220 \begin{isamarkuptext}%
49.221 \noindent
49.222 -The final \isa{end} tells Isabelle to close the current theory because
49.223 +The final \isacommand{end} tells Isabelle to close the current theory because
49.224 we are finished with its development:%
49.225 \end{isamarkuptext}%
49.226 \isacommand{end}\isanewline
50.1 --- a/doc-src/TutorialI/Trie/Trie.thy Fri Sep 01 18:29:52 2000 +0200
50.2 +++ b/doc-src/TutorialI/Trie/Trie.thy Fri Sep 01 19:09:44 2000 +0200
50.3 @@ -5,8 +5,8 @@
50.4 To minimize running time, each node of a trie should contain an array that maps
50.5 letters to subtries. We have chosen a (sometimes) more space efficient
50.6 representation where the subtries are held in an association list, i.e.\ a
50.7 -list of (letter,trie) pairs. Abstracting over the alphabet \isa{'a} and the
50.8 -values \isa{'v} we define a trie as follows:
50.9 +list of (letter,trie) pairs. Abstracting over the alphabet @{typ"'a"} and the
50.10 +values @{typ"'v"} we define a trie as follows:
50.11 *};
50.12
50.13 datatype ('a,'v)trie = Trie "'v option" "('a * ('a,'v)trie)list";
50.14 @@ -47,7 +47,7 @@
50.15
50.16 text{*
50.17 As a first simple property we prove that looking up a string in the empty
50.18 -trie \isa{Trie~None~[]} always returns \isa{None}. The proof merely
50.19 +trie @{term"Trie None []"} always returns @{term"None"}. The proof merely
50.20 distinguishes the two cases whether the search string is empty or not:
50.21 *};
50.22
50.23 @@ -70,14 +70,14 @@
50.24
50.25 text{*\noindent
50.26 The base case is obvious. In the recursive case the subtrie
50.27 -\isa{tt} associated with the first letter \isa{a} is extracted,
50.28 +@{term"tt"} associated with the first letter @{term"a"} is extracted,
50.29 recursively updated, and then placed in front of the association list.
50.30 -The old subtrie associated with \isa{a} is still in the association list
50.31 -but no longer accessible via \isa{assoc}. Clearly, there is room here for
50.32 +The old subtrie associated with @{term"a"} is still in the association list
50.33 +but no longer accessible via @{term"assoc"}. Clearly, there is room here for
50.34 optimizations!
50.35
50.36 -Before we start on any proofs about \isa{update} we tell the simplifier to
50.37 -expand all \isa{let}s and to split all \isa{case}-constructs over
50.38 +Before we start on any proofs about @{term"update"} we tell the simplifier to
50.39 +expand all @{text"let"}s and to split all @{text"case"}-constructs over
50.40 options:
50.41 *};
50.42
50.43 @@ -86,23 +86,23 @@
50.44
50.45 text{*\noindent
50.46 The reason becomes clear when looking (probably after a failed proof
50.47 -attempt) at the body of \isa{update}: it contains both
50.48 -\isa{let} and a case distinction over type \isa{option}.
50.49 +attempt) at the body of @{term"update"}: it contains both
50.50 +@{text"let"} and a case distinction over type @{text"option"}.
50.51
50.52 -Our main goal is to prove the correct interaction of \isa{update} and
50.53 -\isa{lookup}:
50.54 +Our main goal is to prove the correct interaction of @{term"update"} and
50.55 +@{term"lookup"}:
50.56 *};
50.57
50.58 theorem "\\<forall>t v bs. lookup (update t as v) bs =
50.59 (if as=bs then Some v else lookup t bs)";
50.60
50.61 txt{*\noindent
50.62 -Our plan is to induct on \isa{as}; hence the remaining variables are
50.63 +Our plan is to induct on @{term"as"}; hence the remaining variables are
50.64 quantified. From the definitions it is clear that induction on either
50.65 -\isa{as} or \isa{bs} is required. The choice of \isa{as} is merely
50.66 -guided by the intuition that simplification of \isa{lookup} might be easier
50.67 -if \isa{update} has already been simplified, which can only happen if
50.68 -\isa{as} is instantiated.
50.69 +@{term"as"} or @{term"bs"} is required. The choice of @{term"as"} is merely
50.70 +guided by the intuition that simplification of @{term"lookup"} might be easier
50.71 +if @{term"update"} has already been simplified, which can only happen if
50.72 +@{term"as"} is instantiated.
50.73 The start of the proof is completely conventional:
50.74 *};
50.75 apply(induct_tac as, auto);
50.76 @@ -112,27 +112,27 @@
50.77 \begin{isabelle}
50.78 ~1.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
50.79 ~2.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
50.80 -~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs%
50.81 +~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs
50.82 \end{isabelle}
50.83 -Clearly, if we want to make headway we have to instantiate \isa{bs} as
50.84 +Clearly, if we want to make headway we have to instantiate @{term"bs"} as
50.85 well now. It turns out that instead of induction, case distinction
50.86 suffices:
50.87 *};
50.88 by(case_tac[!] bs, auto);
50.89
50.90 text{*\noindent
50.91 -All methods ending in \isa{tac} take an optional first argument that
50.92 -specifies the range of subgoals they are applied to, where \isa{[!]} means all
50.93 -subgoals, i.e.\ \isa{[1-3]} in our case. Individual subgoal numbers,
50.94 -e.g. \isa{[2]} are also allowed.
50.95 +All methods ending in @{text"tac"} take an optional first argument that
50.96 +specifies the range of subgoals they are applied to, where @{text"[!]"} means
50.97 +all subgoals, i.e.\ @{text"[1-3]"} in our case. Individual subgoal numbers,
50.98 +e.g.\ @{text"[2]"}, are also allowed.
50.99
50.100 This proof may look surprisingly straightforward. However, note that this
50.101 -comes at a cost: the proof script is unreadable because the
50.102 -intermediate proof states are invisible, and we rely on the (possibly
50.103 -brittle) magic of \isa{auto} (\isa{simp\_all} will not do---try it) to split the subgoals
50.104 -of the induction up in such a way that case distinction on \isa{bs} makes sense and
50.105 -solves the proof. Part~\ref{Isar} shows you how to write readable and stable
50.106 -proofs.
50.107 +comes at a cost: the proof script is unreadable because the intermediate
50.108 +proof states are invisible, and we rely on the (possibly brittle) magic of
50.109 +@{text"auto"} (@{text"simp_all"} will not do---try it) to split the subgoals
50.110 +of the induction up in such a way that case distinction on @{term"bs"} makes
50.111 +sense and solves the proof. Part~\ref{Isar} shows you how to write readable
50.112 +and stable proofs.
50.113 *};
50.114
50.115 (*<*)
51.1 --- a/doc-src/TutorialI/Trie/document/Trie.tex Fri Sep 01 18:29:52 2000 +0200
51.2 +++ b/doc-src/TutorialI/Trie/document/Trie.tex Fri Sep 01 19:09:44 2000 +0200
51.3 @@ -5,8 +5,8 @@
51.4 To minimize running time, each node of a trie should contain an array that maps
51.5 letters to subtries. We have chosen a (sometimes) more space efficient
51.6 representation where the subtries are held in an association list, i.e.\ a
51.7 -list of (letter,trie) pairs. Abstracting over the alphabet \isa{'a} and the
51.8 -values \isa{'v} we define a trie as follows:%
51.9 +list of (letter,trie) pairs. Abstracting over the alphabet \isa{{\isacharprime}a} and the
51.10 +values \isa{{\isacharprime}v} we define a trie as follows:%
51.11 \end{isamarkuptext}%
51.12 \isacommand{datatype}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}v{\isacharparenright}trie\ {\isacharequal}\ Trie\ \ {\isachardoublequote}{\isacharprime}v\ option{\isachardoublequote}\ \ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a\ {\isacharasterisk}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}v{\isacharparenright}trie{\isacharparenright}list{\isachardoublequote}%
51.13 \begin{isamarkuptext}%
51.14 @@ -41,7 +41,7 @@
51.15 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\isacharbar}\ Some\ at\ {\isasymRightarrow}\ lookup\ at\ as{\isacharparenright}{\isachardoublequote}%
51.16 \begin{isamarkuptext}%
51.17 As a first simple property we prove that looking up a string in the empty
51.18 -trie \isa{Trie~None~[]} always returns \isa{None}. The proof merely
51.19 +trie \isa{Trie\ None\ {\isacharbrackleft}{\isacharbrackright}} always returns \isa{None}. The proof merely
51.20 distinguishes the two cases whether the search string is empty or not:%
51.21 \end{isamarkuptext}%
51.22 \isacommand{lemma}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}lookup\ {\isacharparenleft}Trie\ None\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}\ as\ {\isacharequal}\ None{\isachardoublequote}\isanewline
51.23 @@ -101,7 +101,7 @@
51.24 \begin{isabelle}
51.25 ~1.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
51.26 ~2.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs\isanewline
51.27 -~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs%
51.28 +~3.~\dots~{\isasymLongrightarrow}~lookup~\dots~bs~=~lookup~t~bs
51.29 \end{isabelle}
51.30 Clearly, if we want to make headway we have to instantiate \isa{bs} as
51.31 well now. It turns out that instead of induction, case distinction
51.32 @@ -111,17 +111,17 @@
51.33 \begin{isamarkuptext}%
51.34 \noindent
51.35 All methods ending in \isa{tac} take an optional first argument that
51.36 -specifies the range of subgoals they are applied to, where \isa{[!]} means all
51.37 -subgoals, i.e.\ \isa{[1-3]} in our case. Individual subgoal numbers,
51.38 -e.g. \isa{[2]} are also allowed.
51.39 +specifies the range of subgoals they are applied to, where \isa{{\isacharbrackleft}{\isacharbang}{\isacharbrackright}} means
51.40 +all subgoals, i.e.\ \isa{{\isacharbrackleft}\isadigit{1}{\isacharminus}\isadigit{3}{\isacharbrackright}} in our case. Individual subgoal numbers,
51.41 +e.g.\ \isa{{\isacharbrackleft}\isadigit{2}{\isacharbrackright}}, are also allowed.
51.42
51.43 This proof may look surprisingly straightforward. However, note that this
51.44 -comes at a cost: the proof script is unreadable because the
51.45 -intermediate proof states are invisible, and we rely on the (possibly
51.46 -brittle) magic of \isa{auto} (\isa{simp\_all} will not do---try it) to split the subgoals
51.47 -of the induction up in such a way that case distinction on \isa{bs} makes sense and
51.48 -solves the proof. Part~\ref{Isar} shows you how to write readable and stable
51.49 -proofs.%
51.50 +comes at a cost: the proof script is unreadable because the intermediate
51.51 +proof states are invisible, and we rely on the (possibly brittle) magic of
51.52 +\isa{auto} (\isa{simp{\isacharunderscore}all} will not do---try it) to split the subgoals
51.53 +of the induction up in such a way that case distinction on \isa{bs} makes
51.54 +sense and solves the proof. Part~\ref{Isar} shows you how to write readable
51.55 +and stable proofs.%
51.56 \end{isamarkuptext}%
51.57 \end{isabellebody}%
51.58 %%% Local Variables:
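The subgoal-range syntax discussed in the hunk above can be sketched as an apply-style proof fragment. This is a hypothetical illustration, not part of the patched sources; it assumes the tutorial's trie lemma with three remaining subgoals:

```isabelle
(* Hedged sketch: [!] applies case_tac to all subgoals at once,
   which here is the same as writing [1-3]; [2] would target
   only the second subgoal. *)
apply (case_tac [!] bs)
apply (auto)
```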
52.1 --- a/doc-src/TutorialI/basics.tex Fri Sep 01 18:29:52 2000 +0200
52.2 +++ b/doc-src/TutorialI/basics.tex Fri Sep 01 19:09:44 2000 +0200
52.3 @@ -71,7 +71,7 @@
52.4 proofs are in separate ML files.
52.5
52.6 \begin{warn}
52.7 - HOL contains a theory \ttindexbold{Main}, the union of all the basic
52.8 + HOL contains a theory \isaindexbold{Main}, the union of all the basic
52.9 predefined theories like arithmetic, lists, sets, etc.\ (see the online
52.10 library). Unless you know what you are doing, always include \texttt{Main}
52.11 as a direct or indirect parent theory of all your theories.
52.12 @@ -115,7 +115,7 @@
52.13 called \bfindex{type inference}) and keeps quiet about it. Occasionally
52.14 this may lead to misunderstandings between you and the system. If anything
52.15 strange happens, we recommend to set the \rmindex{flag}
52.16 - \ttindexbold{show_types} that tells Isabelle to display type information
52.17 + \isaindexbold{show_types} that tells Isabelle to display type information
52.18 that is usually suppressed: simply type
52.19 \begin{ttbox}
52.20 ML "set show_types"
52.21 @@ -198,7 +198,7 @@
52.22 input, you can see which parentheses are dropped---they were superfluous. If
52.23 you are unsure how to interpret Isabelle's output because you don't know
52.24 where the (dropped) parentheses go, set (and possibly reset) the \rmindex{flag}
52.25 -\ttindexbold{show_brackets}:
52.26 +\isaindexbold{show_brackets}:
52.27 \begin{ttbox}
52.28 ML "set show_brackets"; \(\dots\); ML "reset show_brackets";
52.29 \end{ttbox}
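The two printing flags changed in this file can be combined in one session; a minimal sketch in the tutorial's own \texttt{ttbox} style, assuming the 2000-era \texttt{ML "set ..."} interface shown above:

```isabelle
ML "set show_types";      (* display the type information usually suppressed *)
ML "set show_brackets";   (* print the parentheses Isabelle normally drops *)
(* ... inspect the ambiguous goal here ... *)
ML "reset show_brackets"; (* restore default, more readable output *)
ML "reset show_types";
```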
53.1 --- a/doc-src/TutorialI/fp.tex Fri Sep 01 18:29:52 2000 +0200
53.2 +++ b/doc-src/TutorialI/fp.tex Fri Sep 01 19:09:44 2000 +0200
53.3 @@ -508,16 +508,7 @@
53.4 $f$~\isasymequiv~\isasymlambda$x\,y.\;t$.
53.5 \end{warn}
53.6
53.7 -\subsubsection{Simplifying let-expressions}
53.8 -\index{simplification!of let-expressions}
53.9 -
53.10 -Proving a goal containing \isaindex{let}-expressions almost invariably
53.11 -requires the \isa{let}-con\-structs to be expanded at some point. Since
53.12 -\isa{let}-\isa{in} is just syntactic sugar for a predefined constant (called
53.13 -\isa{Let}), expanding \isa{let}-constructs means rewriting with
53.14 -\isa{Let_def}:
53.15 -
53.16 -{\makeatother\input{Misc/document/let_rewr.tex}}
53.17 +\input{Misc/document/let_rewr.tex}
53.18
53.19 \subsubsection{Conditional equations}
53.20
53.21 @@ -597,7 +588,7 @@
53.22 \emph{The right-hand side of an equation should (in some sense) be simpler
53.23 than the left-hand side.}
53.24 \end{quote}
53.25 -The heuristic is tricky to apply because it is not obvious that
53.26 +This heuristic is tricky to apply because it is not obvious that
53.27 \isa{rev xs \at\ ys} is simpler than \isa{itrev xs ys}. But see what
53.28 happens if you try to prove \isa{rev xs \at\ ys = itrev xs ys}!
53.29
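The heuristic amended in this hunk can be made concrete. A hedged sketch of where the suggested experiment leads, assuming \isa{itrev} is defined as in the tutorial (\isa{itrev [] ys = ys} and \isa{itrev (x\#xs) ys = itrev xs (x\#ys)}): the naive statement fails under induction because \isa{ys} is fixed, and generalizing it first is the standard remedy.

```isabelle
(* Hypothetical proof attempt; induction on xs gets stuck because
   the induction hypothesis fixes ys: *)
lemma "itrev xs ys = rev xs @ ys"
oops

(* Generalizing ys before induction is the usual fix (sketch): *)
lemma "\<forall>ys. itrev xs ys = rev xs @ ys"
apply (induct_tac xs)
apply simp_all
done
```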
53.30 @@ -635,9 +626,9 @@
53.31 How far can we push nested recursion? By the unfolding argument above, we can
53.32 reduce nested to mutual recursion provided the nested recursion only involves
53.33 previously defined datatypes. This does not include functions:
53.34 -\begin{ttbox}
53.35 -datatype t = C (t => bool)
53.36 -\end{ttbox}
53.37 +\begin{isabelle}
53.38 +\isacommand{datatype} t = C "t \isasymRightarrow\ bool"
53.39 +\end{isabelle}
53.40 is a real can of worms: in HOL it must be ruled out because it requires a type
53.41 \isa{t} such that \isa{t} and its power set \isa{t \isasymFun\ bool} have the
53.42 same cardinality---an impossibility. For the same reason it is not possible
53.43 @@ -654,14 +645,14 @@
53.44 \input{Datatype/document/Fundata.tex}
53.45 \bigskip
53.46
53.47 -If you need nested recursion on the left of a function arrow,
53.48 -there are alternatives to pure HOL: LCF~\cite{paulson87} is a logic where types like
53.49 -\begin{ttbox}
53.50 -datatype lam = C (lam -> lam)
53.51 -\end{ttbox}
53.52 -do indeed make sense (note the intentionally different arrow \isa{->}).
53.53 -There is even a version of LCF on top of HOL, called
53.54 -HOLCF~\cite{MuellerNvOS99}.
53.55 +If you need nested recursion on the left of a function arrow, there are
53.56 +alternatives to pure HOL: LCF~\cite{paulson87} is a logic where types like
53.57 +\begin{isabelle}
53.58 +\isacommand{datatype} lam = C "lam \isasymrightarrow\ lam"
53.59 +\end{isabelle}
53.60 +do indeed make sense (but note the intentionally different arrow
53.61 +\isa{\isasymrightarrow}). There is even a version of LCF on top of HOL,
53.62 +called HOLCF~\cite{MuellerNvOS99}.
53.63
53.64 \index{*primrec|)}
53.65 \index{*datatype|)}
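By contrast with the rejected \isa{datatype t = C "t \isasymRightarrow\ bool"} above, function types are admissible to the \emph{right} of the arrow, i.e.\ in strictly positive positions. A sketch in the spirit of the tutorial's \texttt{Fundata} example (constructor names here are illustrative):

```isabelle
(* Hedged sketch: infinitely branching trees, where each branch node
   carries a label and a branching *function* from indices to subtrees.
   The recursive occurrence is right of \<Rightarrow>, so HOL accepts it. *)
datatype ('a,'i) bigtree = Tip | Br 'a "'i \<Rightarrow> ('a,'i) bigtree"
```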