1.1 --- a/doc-src/TutorialI/Advanced/advanced.tex Wed Oct 11 00:03:22 2000 +0200
1.2 +++ b/doc-src/TutorialI/Advanced/advanced.tex Wed Oct 11 09:09:06 2000 +0200
1.3 @@ -12,13 +12,19 @@
1.4 \input{Advanced/document/simp.tex}
1.5
1.6 \section{Advanced forms of recursion}
1.7 -\label{sec:advanced-recdef}
1.8 \index{*recdef|(}
1.9 +
1.10 +\subsection{Recursion over nested datatypes}
1.11 +\label{sec:nested-recdef}
1.12 \input{Recdef/document/Nested0.tex}
1.13 \input{Recdef/document/Nested1.tex}
1.14 \input{Recdef/document/Nested2.tex}
1.15 \index{*recdef|)}
1.16
1.17 +\subsection{Beyond measure}
1.18 +\label{sec:wellfounded}
1.19 +\input{Recdef/document/WFrec.tex}
1.20 +
1.21 \section{Advanced induction techniques}
1.22 \label{sec:advanced-ind}
1.23 \index{induction|(}
2.1 --- a/doc-src/TutorialI/Advanced/document/simp.tex Wed Oct 11 00:03:22 2000 +0200
2.2 +++ b/doc-src/TutorialI/Advanced/document/simp.tex Wed Oct 11 09:09:06 2000 +0200
2.3 @@ -126,16 +126,71 @@
2.4 \label{sec:SimpHow}
2.5 Roughly speaking, the simplifier proceeds bottom-up (subterms are simplified
2.6 first) and a conditional equation is only applied if its condition could be
2.7 -proved (again by simplification). Below we explain some special%
2.8 +proved (again by simplification). Below we explain some special features of the rewriting process.%
2.9 \end{isamarkuptext}%
2.10 %
2.11 \isamarkupsubsubsection{Higher-order patterns}
2.12 %
2.13 -\isamarkupsubsubsection{Local assumptions}
2.14 +\begin{isamarkuptext}%
2.15 +\index{simplification rule|(}
2.16 +So far we have pretended the simplifier can deal with arbitrary
2.17 +rewrite rules. This is not quite true. For reasons of efficiency (and
2.18 +potentially also computability), the simplifier expects the
2.19 +left-hand side of each rule to be a so-called \emph{higher-order
2.20 +pattern}~\cite{nipkow-patterns}\indexbold{higher-order
2.21 +pattern}\indexbold{pattern, higher-order}. This restricts where
2.22 +unknowns may occur. Higher-order patterns are terms in $\beta$-normal
2.23 +form (this will always be the case unless you have done something
2.24 +strange) where each occurrence of an unknown is of the form
2.25 +$\Var{f}~x@1~\dots~x@n$, where the $x@i$ are distinct bound
2.26 +variables. Thus all ``standard'' rewrite rules, where all unknowns are
2.27 +of base type, for example \isa{{\isacharquery}m\ {\isacharplus}\ {\isacharquery}n\ {\isacharplus}\ {\isacharquery}k\ {\isacharequal}\ {\isacharquery}m\ {\isacharplus}\ {\isacharparenleft}{\isacharquery}n\ {\isacharplus}\ {\isacharquery}k{\isacharparenright}}, are OK: if an unknown is
2.28 +of base type, it cannot have any arguments. Additionally, the rule
2.29 +\isa{{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x\ {\isasymand}\ {\isacharquery}Q\ x{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}{\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}P\ x{\isacharparenright}\ {\isasymand}\ {\isacharparenleft}{\isasymforall}x{\isachardot}\ {\isacharquery}Q\ x{\isacharparenright}{\isacharparenright}} is also OK, in
2.30 +both directions: all arguments of the unknowns \isa{{\isacharquery}P} and
2.31 +\isa{{\isacharquery}Q} are distinct bound variables.
2.32 +
2.33 +If the left-hand side is not a higher-order pattern, not all is lost
2.34 +and the simplifier will still try to apply the rule, but only if it
2.35 +matches ``directly'', i.e.\ without much $\lambda$-calculus hocus
2.36 +pocus. For example, \isa{{\isacharquery}f\ {\isacharquery}x\ {\isasymin}\ range\ {\isacharquery}f\ {\isacharequal}\ True} rewrites
2.37 +\isa{g\ a\ {\isasymin}\ range\ g} to \isa{True}, but will fail to match
2.38 +\isa{g{\isacharparenleft}h\ b{\isacharparenright}\ {\isasymin}\ range{\isacharparenleft}{\isasymlambda}x{\isachardot}\ g{\isacharparenleft}h\ x{\isacharparenright}{\isacharparenright}}. However, you can
2.39 +replace the offending subterms (in our case \isa{{\isacharquery}f\ {\isacharquery}x}, which
2.40 +is not a pattern) by adding new variables and conditions: \isa{{\isacharquery}y\ {\isacharequal}\ {\isacharquery}f\ {\isacharquery}x\ {\isasymLongrightarrow}\ {\isacharquery}y\ {\isasymin}\ range\ {\isacharquery}f\ {\isacharequal}\ True} is fine
2.41 +as a conditional rewrite rule since conditions can be arbitrary
2.42 +terms. However, this trick is not a panacea because the newly
2.43 +introduced conditions may be hard to prove, and that proof has to
2.44 +succeed before the rule can actually be applied.
2.45 +
2.46 +There is basically no restriction on the form of the right-hand
2.47 +sides. They may not contain extraneous term or type variables, though.%
2.48 +\end{isamarkuptext}%
2.49 %
2.50 \isamarkupsubsubsection{The preprocessor}
2.51 %
2.52 \begin{isamarkuptext}%
2.53 +When some theorem is declared a simplification rule, it need not be a
2.54 +conditional equation already. The simplifier will turn it into a set of
2.55 +conditional equations automatically. For example, given \isa{f\ x\ {\isacharequal}\ g\ x\ {\isasymand}\ h\ x\ {\isacharequal}\ k\ x} the simplifier will turn this into the two separate
2.56 +simplification rules \isa{f\ x\ {\isacharequal}\ g\ x} and \isa{h\ x\ {\isacharequal}\ k\ x}. In
2.57 +general, the input theorem is converted as follows:
2.58 +\begin{eqnarray}
2.59 +\neg P &\mapsto& P = False \nonumber\\
2.60 +P \longrightarrow Q &\mapsto& P \Longrightarrow Q \nonumber\\
2.61 +P \land Q &\mapsto& P,\ Q \nonumber\\
2.62 +\forall x.~P~x &\mapsto& P~\Var{x}\nonumber\\
2.63 +\forall x \in A.\ P~x &\mapsto& \Var{x} \in A \Longrightarrow P~\Var{x} \nonumber\\
2.64 +\isa{if}\ P\ \isa{then}\ Q\ \isa{else}\ R &\mapsto&
2.65 + P \Longrightarrow Q,\ \neg P \Longrightarrow R \nonumber
2.66 +\end{eqnarray}
2.67 +Once this conversion process is finished, all remaining non-equations
2.68 +$P$ are turned into trivial equations $P = True$.
2.69 +For example, the formula \isa{{\isacharparenleft}p\ {\isasymlongrightarrow}\ q\ {\isasymand}\ r{\isacharparenright}\ {\isasymand}\ s} is converted into the three rules
2.70 +\begin{center}
2.71 +\isa{p\ {\isasymLongrightarrow}\ q\ {\isacharequal}\ True},\quad \isa{p\ {\isasymLongrightarrow}\ r\ {\isacharequal}\ True},\quad \isa{s\ {\isacharequal}\ True}.
2.72 +\end{center}
2.73 +\index{simplification rule|)}
2.74 \index{simplification|)}%
2.75 \end{isamarkuptext}%
2.76 \end{isabellebody}%
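[Editorial aside, not part of the patch: the preprocessor table added in the hunk above is easy to model operationally. The following Python toy is purely illustrative; the tuple encoding and all names are our own assumptions, not Isabelle internals.]

```python
# A toy model of the simplifier's preprocessor described in the text.
# Formulas are nested tuples; the tags ("not", "imp", "and", "eq") are ours.

def preprocess(fm, conds=()):
    """Turn a theorem into a list of conditional equations
    (conditions, lhs, rhs), mirroring the conversion table."""
    tag = fm[0] if isinstance(fm, tuple) else None
    if tag == "not":                       # ~P        |->  P = False
        return [(conds, fm[1], "False")]
    if tag == "imp":                       # P --> Q   |->  P ==> Q
        return preprocess(fm[2], conds + (fm[1],))
    if tag == "and":                       # P & Q     |->  P,  Q
        return preprocess(fm[1], conds) + preprocess(fm[2], conds)
    if tag == "eq":                        # already an equation
        return [(conds, fm[1], fm[2])]
    return [(conds, fm, "True")]           # remaining non-equation: P = True

# (p --> q & r) & s  yields three rules, as in the text:
fm = ("and", ("imp", "p", ("and", "q", "r")), "s")
rules = preprocess(fm)
```

Running `preprocess` on the example from the text produces the rules `p ==> q = True`, `p ==> r = True`, and `s = True`, matching the display in the hunk.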
3.1 --- a/doc-src/TutorialI/Advanced/simp.thy Wed Oct 11 00:03:22 2000 +0200
3.2 +++ b/doc-src/TutorialI/Advanced/simp.thy Wed Oct 11 09:09:06 2000 +0200
3.3 @@ -114,16 +114,72 @@
3.4 text{*\label{sec:SimpHow}
3.5 Roughly speaking, the simplifier proceeds bottom-up (subterms are simplified
3.6 first) and a conditional equation is only applied if its condition could be
3.7 -proved (again by simplification). Below we explain some special
3.8 +proved (again by simplification). Below we explain some special features of the rewriting process.
3.9 *}
3.10
3.11 subsubsection{*Higher-order patterns*}
3.12
3.13 -subsubsection{*Local assumptions*}
3.14 +text{*\index{simplification rule|(}
3.15 +So far we have pretended the simplifier can deal with arbitrary
3.16 +rewrite rules. This is not quite true. For reasons of efficiency (and
3.17 +potentially also computability), the simplifier expects the
3.18 +left-hand side of each rule to be a so-called \emph{higher-order
3.19 +pattern}~\cite{nipkow-patterns}\indexbold{higher-order
3.20 +pattern}\indexbold{pattern, higher-order}. This restricts where
3.21 +unknowns may occur. Higher-order patterns are terms in $\beta$-normal
3.22 +form (this will always be the case unless you have done something
3.23 +strange) where each occurrence of an unknown is of the form
3.24 +$\Var{f}~x@1~\dots~x@n$, where the $x@i$ are distinct bound
3.25 +variables. Thus all ``standard'' rewrite rules, where all unknowns are
3.26 +of base type, for example @{thm add_assoc}, are OK: if an unknown is
3.27 +of base type, it cannot have any arguments. Additionally, the rule
3.28 +@{text"(\<forall>x. ?P x \<and> ?Q x) = ((\<forall>x. ?P x) \<and> (\<forall>x. ?Q x))"} is also OK, in
3.29 +both directions: all arguments of the unknowns @{text"?P"} and
3.30 +@{text"?Q"} are distinct bound variables.
3.31 +
3.32 +If the left-hand side is not a higher-order pattern, not all is lost
3.33 +and the simplifier will still try to apply the rule, but only if it
3.34 +matches ``directly'', i.e.\ without much $\lambda$-calculus hocus
3.35 +pocus. For example, @{text"?f ?x \<in> range ?f = True"} rewrites
3.36 +@{term"g a \<in> range g"} to @{term True}, but will fail to match
3.37 +@{text"g(h b) \<in> range(\<lambda>x. g(h x))"}. However, you can
3.38 +replace the offending subterms (in our case @{text"?f ?x"}, which
3.39 +is not a pattern) by adding new variables and conditions: @{text"?y =
3.40 +?f ?x \<Longrightarrow> ?y \<in> range ?f = True"} is fine
3.41 +as a conditional rewrite rule since conditions can be arbitrary
3.42 +terms. However, this trick is not a panacea because the newly
3.43 +introduced conditions may be hard to prove, and that proof has to
3.44 +succeed before the rule can actually be applied.
3.45 +
3.46 +There is basically no restriction on the form of the right-hand
3.47 +sides. They may not contain extraneous term or type variables, though.
3.48 +*}
3.49
3.50 subsubsection{*The preprocessor*}
3.51
3.52 text{*
3.53 +When some theorem is declared a simplification rule, it need not be a
3.54 +conditional equation already. The simplifier will turn it into a set of
3.55 +conditional equations automatically. For example, given @{prop"f x =
3.56 +g x & h x = k x"} the simplifier will turn this into the two separate
3.57 +simplification rules @{prop"f x = g x"} and @{prop"h x = k x"}. In
3.58 +general, the input theorem is converted as follows:
3.59 +\begin{eqnarray}
3.60 +\neg P &\mapsto& P = False \nonumber\\
3.61 +P \longrightarrow Q &\mapsto& P \Longrightarrow Q \nonumber\\
3.62 +P \land Q &\mapsto& P,\ Q \nonumber\\
3.63 +\forall x.~P~x &\mapsto& P~\Var{x}\nonumber\\
3.64 +\forall x \in A.\ P~x &\mapsto& \Var{x} \in A \Longrightarrow P~\Var{x} \nonumber\\
3.65 +@{text if}\ P\ @{text then}\ Q\ @{text else}\ R &\mapsto&
3.66 + P \Longrightarrow Q,\ \neg P \Longrightarrow R \nonumber
3.67 +\end{eqnarray}
3.68 +Once this conversion process is finished, all remaining non-equations
3.69 +$P$ are turned into trivial equations $P = True$.
3.70 +For example, the formula @{prop"(p \<longrightarrow> q \<and> r) \<and> s"} is converted into the three rules
3.71 +\begin{center}
3.72 +@{prop"p \<Longrightarrow> q = True"},\quad @{prop"p \<Longrightarrow> r = True"},\quad @{prop"s = True"}.
3.73 +\end{center}
3.74 +\index{simplification rule|)}
3.75 \index{simplification|)}
3.76 *}
3.77 (*<*)
4.1 --- a/doc-src/TutorialI/CTL/Base.thy Wed Oct 11 00:03:22 2000 +0200
4.2 +++ b/doc-src/TutorialI/CTL/Base.thy Wed Oct 11 09:09:06 2000 +0200
4.3 @@ -1,12 +1,12 @@
4.4 (*<*)theory Base = Main:(*>*)
4.5
4.6 -section{*A verified model checker*}
4.7 +section{*Case study: Verified model checkers*}
4.8
4.9 text{*
4.10 Model checking is a very popular technique for the verification of finite
4.11 state systems (implementations) w.r.t.\ temporal logic formulae
4.12 -(specifications) \cite{Clark}. Its foundations are completely set theoretic
4.13 -and this section shall develop them in HOL. This is done in two steps: first
4.14 +(specifications) \cite{Clarke,Huth-Ryan-book}. Its foundations are completely set theoretic
4.15 +and this section shall explore them a little in HOL. This is done in two steps: first
4.16 we consider a simple modal logic called propositional dynamic
4.17 logic (PDL) which we then extend to the temporal logic CTL used in many real
4.18 model checkers. In each case we give both a traditional semantics (@{text \<Turnstile>}) and a
4.19 @@ -45,8 +45,9 @@
4.20 Each state has a unique name or number ($s_0,s_1,s_2$), and in each
4.21 state certain \emph{atomic propositions} ($p,q,r$) are true.
4.22 The aim of temporal logic is to formalize statements such as ``there is no
4.23 -transition sequence starting from $s_2$ leading to a state where $p$ or $q$
4.24 -are true''.
4.25 +path starting from $s_2$ leading to a state where $p$ or $q$
4.26 +are true'', which is true, and ``on all paths starting from $s_0$, $q$ is always true'',
4.27 +which is false.
4.28
4.29 Abstracting from this concrete example, we assume there is some type of
4.30 states
5.1 --- a/doc-src/TutorialI/CTL/CTL.thy Wed Oct 11 00:03:22 2000 +0200
5.2 +++ b/doc-src/TutorialI/CTL/CTL.thy Wed Oct 11 09:09:06 2000 +0200
5.3 @@ -91,9 +91,9 @@
5.4 apply(rule subsetI);
5.5 apply(simp, clarify);
5.6 apply(erule converse_rtrancl_induct);
5.7 - apply(rule ssubst[OF lfp_Tarski[OF mono_ef]]);
5.8 + apply(rule ssubst[OF lfp_unfold[OF mono_ef]]);
5.9 apply(blast);
5.10 -apply(rule ssubst[OF lfp_Tarski[OF mono_ef]]);
5.11 +apply(rule ssubst[OF lfp_unfold[OF mono_ef]]);
5.12 by(blast);
5.13 (*>*)
5.14 text{*
5.15 @@ -174,7 +174,7 @@
5.16 lemma not_in_lfp_afD:
5.17 "s \<notin> lfp(af A) \<Longrightarrow> s \<notin> A \<and> (\<exists> t. (s,t)\<in>M \<and> t \<notin> lfp(af A))";
5.18 apply(erule swap);
5.19 -apply(rule ssubst[OF lfp_Tarski[OF mono_af]]);
5.20 +apply(rule ssubst[OF lfp_unfold[OF mono_af]]);
5.21 apply(simp add:af_def);
5.22 done;
5.23
5.24 @@ -380,19 +380,21 @@
5.25 @{text[display]"s \<Turnstile> EU f g = (\<exists>p\<in>Paths s. \<exists>j. p j \<Turnstile> g \<and> (\<exists>i < j. p i \<Turnstile> f))"}
5.26 and model checking algorithm
5.27 @{text[display]"mc(EU f g) = lfp(\<lambda>T. mc g \<union> mc f \<inter> (M^-1 ^^ T))"}
5.28 -Prove the equivalence between semantics and model checking, i.e.\
5.29 -@{prop"mc(EU f g) = {s. s \<Turnstile> EU f g}"}.
5.30 +Prove the equivalence between semantics and model checking, i.e.\ that
5.31 +@{prop[display]"mc(EU f g) = {s. s \<Turnstile> EU f g}"}
5.32 +%For readability you may want to annotate {term EU} with its customary syntax
5.33 +%{text[display]"| EU formula formula E[_ U _]"}
5.34 +%which enables you to read and write {text"E[f U g]"} instead of {term"EU f g"}.
5.35 +\end{exercise}
5.36 +For more CTL exercises see, for example, \cite{Huth-Ryan-book,Clarke-as-well?}.
5.37 +\bigskip
5.38
5.39 -For readability you may want to equip @{term EU} with the following customary syntax:
5.40 -@{text"E[f U g]"}.
5.41 -\end{exercise}
5.42 -
5.43 -Let us close this section with a few words about the executability of @{term mc}.
5.44 +Let us close this section with a few words about the executability of our model checkers.
5.45 It is clear that if all sets are finite, they can be represented as lists and the usual
5.46 set operations are easily implemented. Only @{term lfp} requires a little thought.
5.47 Fortunately the HOL library proves that in the case of finite sets and a monotone @{term F},
5.48 @{term"lfp F"} can be computed by iterated application of @{term F} to @{term"{}"} until
5.49 -a fixpoint is reached. It is possible to generate executable functional programs
5.50 +a fixpoint is reached. It is actually possible to generate executable functional programs
5.51 from HOL definitions, but that is beyond the scope of the tutorial.
5.52 *}
5.53
5.54 @@ -428,7 +430,7 @@
5.55 apply(subgoal_tac "wf{(y,x). (x,y)\<in>M \<and> x \<in> Avoid s A \<and> y \<in> Avoid s A \<and> x \<notin> A}");
5.56 apply(erule_tac a = t in wf_induct);
5.57 apply(clarsimp);
5.58 - apply(rule ssubst [OF lfp_Tarski[OF mono_af]]);
5.59 + apply(rule ssubst [OF lfp_unfold[OF mono_af]]);
5.60 apply(unfold af_def);
5.61 apply(blast intro:Avoid.intros);
5.62 apply(erule contrapos2);
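[Editorial aside, not part of the patch: the closing remark in this hunk about computing `lfp F` by iterated application of `F` to `{}` can be made concrete. A minimal Python sketch follows; the transition relation, state numbers, and helper names are all our own illustrative assumptions.]

```python
# Least fixpoint of a monotone F on finite sets, by iteration from {},
# as the text describes for executable model checking.

def lfp(F):
    """Iterate F from the empty set until a fixpoint is reached."""
    T = frozenset()
    while True:
        T2 = F(T)
        if T2 == T:
            return T
        T = T2

def preimage(M, T):
    """M^-1 ^^ T: the states with a transition into T."""
    return frozenset(s for (s, t) in M if t in T)

# mc(EF f) = lfp(%T. mc f Un M^-1 ^^ T): states from which A is reachable.
M = {(0, 1), (1, 2), (3, 3)}       # a tiny transition relation (assumed)
A = frozenset({2})                 # states satisfying f (assumed)
ef_A = lfp(lambda T: A | preimage(M, T))
```

On this toy relation the iteration stabilizes after three steps at `{0, 1, 2}`: exactly the states from which state 2 is reachable, while the self-looping state 3 is excluded.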
6.1 --- a/doc-src/TutorialI/CTL/PDL.thy Wed Oct 11 00:03:22 2000 +0200
6.2 +++ b/doc-src/TutorialI/CTL/PDL.thy Wed Oct 11 09:09:06 2000 +0200
6.3 @@ -158,7 +158,7 @@
6.4 is solved by unrolling @{term lfp} once
6.5 *}
6.6
6.7 - apply(rule ssubst[OF lfp_Tarski[OF mono_ef]])
6.8 + apply(rule ssubst[OF lfp_unfold[OF mono_ef]])
6.9 (*pr(latex xsymbols symbols);*)
6.10 txt{*
6.11 \begin{isabelle}
6.12 @@ -173,7 +173,7 @@
6.13 The proof of the induction step is identical to the one for the base case:
6.14 *}
6.15
6.16 -apply(rule ssubst[OF lfp_Tarski[OF mono_ef]])
6.17 +apply(rule ssubst[OF lfp_unfold[OF mono_ef]])
6.18 apply(blast)
6.19 done
6.20
6.21 @@ -213,7 +213,7 @@
6.22 lemma "(s \<Turnstile> EF f) = (s \<Turnstile> f | s \<Turnstile> Neg(AX(Neg(EF f))))";
6.23 apply(simp only:aux);
6.24 apply(simp);
6.25 -apply(rule ssubst[OF lfp_Tarski[OF mono_ef]], fast);
6.26 +apply(rule ssubst[OF lfp_unfold[OF mono_ef]], fast);
6.27 done
6.28
6.29 end
7.1 --- a/doc-src/TutorialI/CTL/document/Base.tex Wed Oct 11 00:03:22 2000 +0200
7.2 +++ b/doc-src/TutorialI/CTL/document/Base.tex Wed Oct 11 09:09:06 2000 +0200
7.3 @@ -2,13 +2,13 @@
7.4 \begin{isabellebody}%
7.5 \def\isabellecontext{Base}%
7.6 %
7.7 -\isamarkupsection{A verified model checker}
7.8 +\isamarkupsection{Case study: Verified model checkers}
7.9 %
7.10 \begin{isamarkuptext}%
7.11 Model checking is a very popular technique for the verification of finite
7.12 state systems (implementations) w.r.t.\ temporal logic formulae
7.13 -(specifications) \cite{Clark}. Its foundations are completely set theoretic
7.14 -and this section shall develop them in HOL. This is done in two steps: first
7.15 +(specifications) \cite{Clarke,Huth-Ryan-book}. Its foundations are completely set theoretic
7.16 +and this section shall explore them a little in HOL. This is done in two steps: first
7.17 we consider a simple modal logic called propositional dynamic
7.18 logic (PDL) which we then extend to the temporal logic CTL used in many real
7.19 model checkers. In each case we give both a traditional semantics (\isa{{\isasymTurnstile}}) and a
7.20 @@ -47,8 +47,9 @@
7.21 Each state has a unique name or number ($s_0,s_1,s_2$), and in each
7.22 state certain \emph{atomic propositions} ($p,q,r$) are true.
7.23 The aim of temporal logic is to formalize statements such as ``there is no
7.24 -transition sequence starting from $s_2$ leading to a state where $p$ or $q$
7.25 -are true''.
7.26 +path starting from $s_2$ leading to a state where $p$ or $q$
7.27 +are true'', which is true, and ``on all paths starting from $s_0$, $q$ is always true'',
7.28 +which is false.
7.29
7.30 Abstracting from this concrete example, we assume there is some type of
7.31 states%
8.1 --- a/doc-src/TutorialI/CTL/document/CTL.tex Wed Oct 11 00:03:22 2000 +0200
8.2 +++ b/doc-src/TutorialI/CTL/document/CTL.tex Wed Oct 11 09:09:06 2000 +0200
8.3 @@ -114,7 +114,7 @@
8.4 \isacommand{lemma}\ not{\isacharunderscore}in{\isacharunderscore}lfp{\isacharunderscore}afD{\isacharcolon}\isanewline
8.5 \ {\isachardoublequote}s\ {\isasymnotin}\ lfp{\isacharparenleft}af\ A{\isacharparenright}\ {\isasymLongrightarrow}\ s\ {\isasymnotin}\ A\ {\isasymand}\ {\isacharparenleft}{\isasymexists}\ t{\isachardot}\ {\isacharparenleft}s{\isacharcomma}t{\isacharparenright}{\isasymin}M\ {\isasymand}\ t\ {\isasymnotin}\ lfp{\isacharparenleft}af\ A{\isacharparenright}{\isacharparenright}{\isachardoublequote}\isanewline
8.6 \isacommand{apply}{\isacharparenleft}erule\ swap{\isacharparenright}\isanewline
8.7 -\isacommand{apply}{\isacharparenleft}rule\ ssubst{\isacharbrackleft}OF\ lfp{\isacharunderscore}Tarski{\isacharbrackleft}OF\ mono{\isacharunderscore}af{\isacharbrackright}{\isacharbrackright}{\isacharparenright}\isanewline
8.8 +\isacommand{apply}{\isacharparenleft}rule\ ssubst{\isacharbrackleft}OF\ lfp{\isacharunderscore}unfold{\isacharbrackleft}OF\ mono{\isacharunderscore}af{\isacharbrackright}{\isacharbrackright}{\isacharparenright}\isanewline
8.9 \isacommand{apply}{\isacharparenleft}simp\ add{\isacharcolon}af{\isacharunderscore}def{\isacharparenright}\isanewline
8.10 \isacommand{done}%
8.11 \begin{isamarkuptext}%
8.12 @@ -283,19 +283,23 @@
8.13 \begin{isabelle}%
8.14 \ \ \ \ \ mc{\isacharparenleft}EU\ f\ g{\isacharparenright}\ {\isacharequal}\ lfp{\isacharparenleft}{\isasymlambda}T{\isachardot}\ mc\ g\ {\isasymunion}\ mc\ f\ {\isasyminter}\ {\isacharparenleft}M{\isacharcircum}{\isacharminus}\isadigit{1}\ {\isacharcircum}{\isacharcircum}\ T{\isacharparenright}{\isacharparenright}%
8.15 \end{isabelle}
8.16 -Prove the equivalence between semantics and model checking, i.e.\
8.17 -\isa{mc\ {\isacharparenleft}EU\ f\ g{\isacharparenright}\ {\isacharequal}\ {\isacharbraceleft}s{\isachardot}\ s\ {\isasymTurnstile}\ EU\ f\ g{\isacharbraceright}}.
8.18 +Prove the equivalence between semantics and model checking, i.e.\ that
8.19 +\begin{isabelle}%
8.20 +\ \ \ \ \ mc\ {\isacharparenleft}EU\ f\ g{\isacharparenright}\ {\isacharequal}\ {\isacharbraceleft}s{\isachardot}\ s\ {\isasymTurnstile}\ EU\ f\ g{\isacharbraceright}%
8.21 +\end{isabelle}
8.22 +%For readability you may want to annotate {term EU} with its customary syntax
8.23 +%{text[display]"| EU formula formula E[_ U _]"}
8.24 +%which enables you to read and write {text"E[f U g]"} instead of {term"EU f g"}.
8.25 +\end{exercise}
8.26 +For more CTL exercises see, for example, \cite{Huth-Ryan-book,Clarke-as-well?}.
8.27 +\bigskip
8.28
8.29 -For readability you may want to equip \isa{EU} with the following customary syntax:
8.30 -\isa{E{\isacharbrackleft}f\ U\ g{\isacharbrackright}}.
8.31 -\end{exercise}
8.32 -
8.33 -Let us close this section with a few words about the executability of \isa{mc}.
8.34 +Let us close this section with a few words about the executability of our model checkers.
8.35 It is clear that if all sets are finite, they can be represented as lists and the usual
8.36 set operations are easily implemented. Only \isa{lfp} requires a little thought.
8.37 Fortunately the HOL library proves that in the case of finite sets and a monotone \isa{F},
8.38 \isa{lfp\ F} can be computed by iterated application of \isa{F} to \isa{{\isacharbraceleft}{\isacharbraceright}} until
8.39 -a fixpoint is reached. It is possible to generate executable functional programs
8.40 +a fixpoint is reached. It is actually possible to generate executable functional programs
8.41 from HOL definitions, but that is beyond the scope of the tutorial.%
8.42 \end{isamarkuptext}%
8.43 \end{isabellebody}%
9.1 --- a/doc-src/TutorialI/CTL/document/PDL.tex Wed Oct 11 00:03:22 2000 +0200
9.2 +++ b/doc-src/TutorialI/CTL/document/PDL.tex Wed Oct 11 09:09:06 2000 +0200
9.3 @@ -152,7 +152,7 @@
9.4 \end{isabelle}
9.5 is solved by unrolling \isa{lfp} once%
9.6 \end{isamarkuptxt}%
9.7 -\ \isacommand{apply}{\isacharparenleft}rule\ ssubst{\isacharbrackleft}OF\ lfp{\isacharunderscore}Tarski{\isacharbrackleft}OF\ mono{\isacharunderscore}ef{\isacharbrackright}{\isacharbrackright}{\isacharparenright}%
9.8 +\ \isacommand{apply}{\isacharparenleft}rule\ ssubst{\isacharbrackleft}OF\ lfp{\isacharunderscore}unfold{\isacharbrackleft}OF\ mono{\isacharunderscore}ef{\isacharbrackright}{\isacharbrackright}{\isacharparenright}%
9.9 \begin{isamarkuptxt}%
9.10 \begin{isabelle}
9.11 \ \isadigit{1}{\isachardot}\ {\isasymAnd}t{\isachardot}\ t\ {\isasymin}\ A\ {\isasymLongrightarrow}\ t\ {\isasymin}\ A\ {\isasymunion}\ M{\isacharcircum}{\isacharminus}\isadigit{1}\ {\isacharcircum}{\isacharcircum}\ lfp\ {\isacharparenleft}{\isasymlambda}T{\isachardot}\ A\ {\isasymunion}\ M{\isacharcircum}{\isacharminus}\isadigit{1}\ {\isacharcircum}{\isacharcircum}\ T{\isacharparenright}
9.12 @@ -164,7 +164,7 @@
9.13 \noindent
9.14 The proof of the induction step is identical to the one for the base case:%
9.15 \end{isamarkuptxt}%
9.16 -\isacommand{apply}{\isacharparenleft}rule\ ssubst{\isacharbrackleft}OF\ lfp{\isacharunderscore}Tarski{\isacharbrackleft}OF\ mono{\isacharunderscore}ef{\isacharbrackright}{\isacharbrackright}{\isacharparenright}\isanewline
9.17 +\isacommand{apply}{\isacharparenleft}rule\ ssubst{\isacharbrackleft}OF\ lfp{\isacharunderscore}unfold{\isacharbrackleft}OF\ mono{\isacharunderscore}ef{\isacharbrackright}{\isacharbrackright}{\isacharparenright}\isanewline
9.18 \isacommand{apply}{\isacharparenleft}blast{\isacharparenright}\isanewline
9.19 \isacommand{done}%
9.20 \begin{isamarkuptext}%
10.1 --- a/doc-src/TutorialI/Datatype/Nested.thy Wed Oct 11 00:03:22 2000 +0200
10.2 +++ b/doc-src/TutorialI/Datatype/Nested.thy Wed Oct 11 09:09:06 2000 +0200
10.3 @@ -118,7 +118,7 @@
10.4 could derive a new induction principle as well (see
10.5 \S\ref{sec:derive-ind}), but turns out to be simpler to define
10.6 functions by \isacommand{recdef} instead of \isacommand{primrec}.
10.7 -The details are explained in \S\ref{sec:advanced-recdef} below.
10.8 +The details are explained in \S\ref{sec:nested-recdef} below.
10.9
10.10 Of course, you may also combine mutual and nested recursion. For example,
10.11 constructor @{text Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
11.1 --- a/doc-src/TutorialI/Datatype/document/Nested.tex Wed Oct 11 00:03:22 2000 +0200
11.2 +++ b/doc-src/TutorialI/Datatype/document/Nested.tex Wed Oct 11 09:09:06 2000 +0200
11.3 @@ -116,7 +116,7 @@
11.4 could derive a new induction principle as well (see
11.5 \S\ref{sec:derive-ind}), but turns out to be simpler to define
11.6 functions by \isacommand{recdef} instead of \isacommand{primrec}.
11.7 -The details are explained in \S\ref{sec:advanced-recdef} below.
11.8 +The details are explained in \S\ref{sec:nested-recdef} below.
11.9
11.10 Of course, you may also combine mutual and nested recursion. For example,
11.11 constructor \isa{Sum} in \S\ref{sec:datatype-mut-rec} could take a list of
12.1 --- a/doc-src/TutorialI/IsaMakefile Wed Oct 11 00:03:22 2000 +0200
12.2 +++ b/doc-src/TutorialI/IsaMakefile Wed Oct 11 09:09:06 2000 +0200
12.3 @@ -87,9 +87,9 @@
12.4
12.5 HOL-Recdef: HOL $(LOG)/HOL-Recdef.gz
12.6
12.7 -$(LOG)/HOL-Recdef.gz: $(OUT)/HOL Recdef/examples.thy Recdef/termination.thy \
12.8 +$(LOG)/HOL-Recdef.gz: $(OUT)/HOL Recdef/ROOT.ML Recdef/examples.thy Recdef/termination.thy \
12.9 Recdef/simplification.thy Recdef/Induction.thy \
12.10 - Recdef/Nested0.thy Recdef/Nested1.thy Recdef/Nested2.thy
12.11 + Recdef/Nested0.thy Recdef/Nested1.thy Recdef/Nested2.thy Recdef/WFrec.thy
12.12 @$(ISATOOL) usedir -i true -d dvi -D document $(OUT)/HOL Recdef
12.13 @rm -f tutorial.dvi
12.14
13.1 --- a/doc-src/TutorialI/Misc/AdvancedInd.thy Wed Oct 11 00:03:22 2000 +0200
13.2 +++ b/doc-src/TutorialI/Misc/AdvancedInd.thy Wed Oct 11 09:09:06 2000 +0200
13.3 @@ -275,13 +275,18 @@
13.4 by(insert induct_lem, blast);
13.5
13.6 text{*
13.7 +
13.8 Finally we should mention that HOL already provides the mother of all
13.9 -inductions, \emph{wellfounded induction} (@{thm[source]wf_induct}):
13.10 -@{thm[display]"wf_induct"[no_vars]}
13.11 -where @{term"wf r"} means that the relation @{term"r"} is wellfounded.
13.12 +inductions, \textbf{wellfounded
13.13 +induction}\indexbold{induction!wellfounded}\index{wellfounded
13.14 +induction|see{induction, wellfounded}} (@{thm[source]wf_induct}):
13.15 +@{thm[display]wf_induct[no_vars]}
13.16 +where @{term"wf r"} means that the relation @{term r} is wellfounded
13.17 +(see \S\ref{sec:wellfounded}).
13.18 For example, theorem @{thm[source]nat_less_induct} can be viewed (and
13.19 derived) as a special case of @{thm[source]wf_induct} where
13.20 -@{term"r"} is @{text"<"} on @{typ"nat"}. For details see the library.
13.21 +@{term r} is @{text"<"} on @{typ nat}. The details can be found in the HOL library.
13.22 +For a mathematical account of wellfounded induction see, for example, \cite{Baader-Nipkow}.
13.23 *};
13.24
13.25 (*<*)
14.1 --- a/doc-src/TutorialI/Misc/document/AdvancedInd.tex Wed Oct 11 00:03:22 2000 +0200
14.2 +++ b/doc-src/TutorialI/Misc/document/AdvancedInd.tex Wed Oct 11 09:09:06 2000 +0200
14.3 @@ -251,14 +251,18 @@
14.4 \isacommand{by}{\isacharparenleft}insert\ induct{\isacharunderscore}lem{\isacharcomma}\ blast{\isacharparenright}%
14.5 \begin{isamarkuptext}%
14.6 Finally we should mention that HOL already provides the mother of all
14.7 -inductions, \emph{wellfounded induction} (\isa{wf{\isacharunderscore}induct}):
14.8 +inductions, \textbf{wellfounded
14.9 +induction}\indexbold{induction!wellfounded}\index{wellfounded
14.10 +induction|see{induction, wellfounded}} (\isa{wf{\isacharunderscore}induct}):
14.11 \begin{isabelle}%
14.12 \ \ \ \ \ {\isasymlbrakk}wf\ r{\isacharsemicolon}\ {\isasymAnd}x{\isachardot}\ {\isasymforall}y{\isachardot}\ {\isacharparenleft}y{\isacharcomma}\ x{\isacharparenright}\ {\isasymin}\ r\ {\isasymlongrightarrow}\ P\ y\ {\isasymLongrightarrow}\ P\ x{\isasymrbrakk}\ {\isasymLongrightarrow}\ P\ a%
14.13 \end{isabelle}
14.14 -where \isa{wf\ r} means that the relation \isa{r} is wellfounded.
14.15 +where \isa{wf\ r} means that the relation \isa{r} is wellfounded
14.16 +(see \S\ref{sec:wellfounded}).
14.17 For example, theorem \isa{nat{\isacharunderscore}less{\isacharunderscore}induct} can be viewed (and
14.18 derived) as a special case of \isa{wf{\isacharunderscore}induct} where
14.19 -\isa{r} is \isa{{\isacharless}} on \isa{nat}. For details see the library.%
14.20 +\isa{r} is \isa{{\isacharless}} on \isa{nat}. The details can be found in the HOL library.
14.21 +For a mathematical account of wellfounded induction see, for example, \cite{Baader-Nipkow}.%
14.22 \end{isamarkuptext}%
14.23 \end{isabellebody}%
14.24 %%% Local Variables:
15.1 --- a/doc-src/TutorialI/Recdef/Nested0.thy Wed Oct 11 00:03:22 2000 +0200
15.2 +++ b/doc-src/TutorialI/Recdef/Nested0.thy Wed Oct 11 09:09:06 2000 +0200
15.3 @@ -17,7 +17,6 @@
15.4 definitions and proofs about nested recursive datatypes. As an example we
15.5 choose exercise~\ref{ex:trev-trev}:
15.6 *}
15.7 -(* consts trev :: "('a,'b)term => ('a,'b)term" *)
15.8 -(*<*)
15.9 -end
15.10 -(*>*)
15.11 +
15.12 +consts trev :: "('a,'b)term \<Rightarrow> ('a,'b)term"
15.13 +(*<*)end(*>*)
16.1 --- a/doc-src/TutorialI/Recdef/Nested1.thy Wed Oct 11 00:03:22 2000 +0200
16.2 +++ b/doc-src/TutorialI/Recdef/Nested1.thy Wed Oct 11 09:09:06 2000 +0200
16.3 @@ -1,7 +1,6 @@
16.4 (*<*)
16.5 theory Nested1 = Nested0:;
16.6 (*>*)
16.7 -consts trev :: "('a,'b)term \<Rightarrow> ('a,'b)term";
16.8
16.9 text{*\noindent
16.10 Although the definition of @{term trev} is quite natural, we will have
17.1 --- a/doc-src/TutorialI/Recdef/Nested2.thy Wed Oct 11 00:03:22 2000 +0200
17.2 +++ b/doc-src/TutorialI/Recdef/Nested2.thy Wed Oct 11 09:09:06 2000 +0200
17.3 @@ -1,6 +1,5 @@
17.4 (*<*)
17.5 theory Nested2 = Nested0:;
17.6 -consts trev :: "('a,'b)term => ('a,'b)term";
17.7 (*>*)
17.8
17.9 text{*\noindent
18.1 --- a/doc-src/TutorialI/Recdef/ROOT.ML Wed Oct 11 00:03:22 2000 +0200
18.2 +++ b/doc-src/TutorialI/Recdef/ROOT.ML Wed Oct 11 09:09:06 2000 +0200
18.3 @@ -3,3 +3,4 @@
18.4 use_thy "Induction";
18.5 use_thy "Nested1";
18.6 use_thy "Nested2";
18.7 +use_thy "WFrec";
19.1 --- a/doc-src/TutorialI/Recdef/document/Nested0.tex Wed Oct 11 00:03:22 2000 +0200
19.2 +++ b/doc-src/TutorialI/Recdef/document/Nested0.tex Wed Oct 11 09:09:06 2000 +0200
19.3 @@ -16,7 +16,7 @@
19.4 definitions and proofs about nested recursive datatypes. As an example we
19.5 choose exercise~\ref{ex:trev-trev}:%
19.6 \end{isamarkuptext}%
19.7 -\end{isabellebody}%
19.8 +\isacommand{consts}\ trev\ \ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}term\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}term{\isachardoublequote}\end{isabellebody}%
19.9 %%% Local Variables:
19.10 %%% mode: latex
19.11 %%% TeX-master: "root"
20.1 --- a/doc-src/TutorialI/Recdef/document/Nested1.tex Wed Oct 11 00:03:22 2000 +0200
20.2 +++ b/doc-src/TutorialI/Recdef/document/Nested1.tex Wed Oct 11 09:09:06 2000 +0200
20.3 @@ -1,7 +1,7 @@
20.4 %
20.5 \begin{isabellebody}%
20.6 \def\isabellecontext{Nested1}%
20.7 -\isacommand{consts}\ trev\ \ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}term\ {\isasymRightarrow}\ {\isacharparenleft}{\isacharprime}a{\isacharcomma}{\isacharprime}b{\isacharparenright}term{\isachardoublequote}%
20.8 +%
20.9 \begin{isamarkuptext}%
20.10 \noindent
20.11 Although the definition of \isa{trev} is quite natural, we will have
21.1 --- a/doc-src/TutorialI/Recdef/document/termination.tex Wed Oct 11 00:03:22 2000 +0200
21.2 +++ b/doc-src/TutorialI/Recdef/document/termination.tex Wed Oct 11 09:09:06 2000 +0200
21.3 @@ -75,21 +75,7 @@
21.4 it later on and then use it to remove the preconditions from the theorems
21.5 about \isa{f}. In most cases this is more cumbersome than proving things
21.6 up front.
21.7 -%FIXME, with one exception: nested recursion.
21.8 -
21.9 -Although all the above examples employ measure functions, \isacommand{recdef}
21.10 -allows arbitrary wellfounded relations. For example, termination of
21.11 -Ackermann's function requires the lexicographic product \isa{{\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}}:%
21.12 -\end{isamarkuptext}%
21.13 -\isacommand{consts}\ ack\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}nat{\isasymtimes}nat\ {\isasymRightarrow}\ nat{\isachardoublequote}\isanewline
21.14 -\isacommand{recdef}\ ack\ {\isachardoublequote}measure{\isacharparenleft}{\isasymlambda}m{\isachardot}\ m{\isacharparenright}\ {\isacharless}{\isacharasterisk}lex{\isacharasterisk}{\isachargreater}\ measure{\isacharparenleft}{\isasymlambda}n{\isachardot}\ n{\isacharparenright}{\isachardoublequote}\isanewline
21.15 -\ \ {\isachardoublequote}ack{\isacharparenleft}\isadigit{0}{\isacharcomma}n{\isacharparenright}\ \ \ \ \ \ \ \ \ {\isacharequal}\ Suc\ n{\isachardoublequote}\isanewline
21.16 -\ \ {\isachardoublequote}ack{\isacharparenleft}Suc\ m{\isacharcomma}\isadigit{0}{\isacharparenright}\ \ \ \ \ {\isacharequal}\ ack{\isacharparenleft}m{\isacharcomma}\ \isadigit{1}{\isacharparenright}{\isachardoublequote}\isanewline
21.17 -\ \ {\isachardoublequote}ack{\isacharparenleft}Suc\ m{\isacharcomma}Suc\ n{\isacharparenright}\ {\isacharequal}\ ack{\isacharparenleft}m{\isacharcomma}ack{\isacharparenleft}Suc\ m{\isacharcomma}n{\isacharparenright}{\isacharparenright}{\isachardoublequote}%
21.18 -\begin{isamarkuptext}%
21.19 -\noindent
21.20 -For details see the manual~\cite{isabelle-HOL} and the examples in the
21.21 -library.%
21.22 +%FIXME, with one exception: nested recursion.%
21.23 \end{isamarkuptext}%
21.24 \end{isabellebody}%
21.25 %%% Local Variables:
22.1 --- a/doc-src/TutorialI/Recdef/termination.thy Wed Oct 11 00:03:22 2000 +0200
22.2 +++ b/doc-src/TutorialI/Recdef/termination.thy Wed Oct 11 09:09:06 2000 +0200
22.3 @@ -81,21 +81,6 @@
22.4 about @{term f}. In most cases this is more cumbersome than proving things
22.5 up front.
22.6 %FIXME, with one exception: nested recursion.
22.7 -
22.8 -Although all the above examples employ measure functions, \isacommand{recdef}
22.9 -allows arbitrary wellfounded relations. For example, termination of
22.10 -Ackermann's function requires the lexicographic product @{text"<*lex*>"}:
22.11 -*}
22.12 -
22.13 -consts ack :: "nat\<times>nat \<Rightarrow> nat";
22.14 -recdef ack "measure(\<lambda>m. m) <*lex*> measure(\<lambda>n. n)"
22.15 - "ack(0,n) = Suc n"
22.16 - "ack(Suc m,0) = ack(m, 1)"
22.17 - "ack(Suc m,Suc n) = ack(m,ack(Suc m,n))";
22.18 -
22.19 -text{*\noindent
22.20 -For details see the manual~\cite{isabelle-HOL} and the examples in the
22.21 -library.
22.22 *}
22.23
22.24 (*<*)
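The recdef deleted above (apparently relocated to the new WFrec theory added elsewhere in this changeset) defines Ackermann's function, whose termination needs the lexicographic product `<*lex*>` of two measures rather than a single measure. A minimal Python transcription of the three equations, as a sketch only: plain recursion, so only small arguments are feasible.

```python
def ack(m, n):
    """Ackermann's function, following the three recdef equations.

    Termination argument: every recursive call strictly decreases the
    pair (m, n) in the lexicographic order on nat x nat -- exactly the
    measure(%m. m) <*lex*> measure(%n. n) relation given to recdef.
    """
    if m == 0:
        return n + 1                       # ack(0, n)         = Suc n
    if n == 0:
        return ack(m - 1, 1)               # ack(Suc m, 0)     = ack(m, 1)
    return ack(m - 1, ack(m, n - 1))       # ack(Suc m, Suc n) = ack(m, ack(Suc m, n))

print(ack(2, 3))  # -> 9; the function grows explosively in m
```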
23.1 --- a/doc-src/TutorialI/todo.tobias Wed Oct 11 00:03:22 2000 +0200
23.2 +++ b/doc-src/TutorialI/todo.tobias Wed Oct 11 09:09:06 2000 +0200
23.3 @@ -27,15 +27,16 @@
23.4
23.5 defs with = and pattern matching
23.6
23.7 -Why can't I declare trev at the end of Recdef/Nested0 and define it in
23.8 -Recdef/Nested1??
23.9 -
23.10 use arith_tac in recdef to solve termination conditions?
23.11 -> new example in Recdef/termination
23.12
23.13 a tactic for replacing a specific occurrence:
23.14 apply(substitute [2] thm)
23.15
23.16 +it would be nice if @term could deal with ?-vars.
23.17 +then a number of (unchecked!) @texts could be converted to @terms.
23.18 +
23.19 +
23.20 Minor fixes in the tutorial
23.21 ===========================
23.22
23.23 @@ -45,8 +46,6 @@
23.24
23.25 Explain typographic conventions?
23.26
23.27 -how the simplifier works
23.28 -
23.29 an example of induction: !y. A --> B --> C ??
23.30
23.31 Advanced Ind expects rulify, mp and spec. How much really?
23.32 @@ -80,15 +79,18 @@
23.33 literature. In Recdef/termination.thy, at the end.
23.34 %FIXME, with one exception: nested recursion.
23.35
23.36 -Sets: rels and ^*
23.37 +Syntax section: syntax annotations not just for consts but also for constdefs and datatype.
23.38 +
23.39 +Prove EU exercise in CTL.
23.40
23.41
23.42 Minor additions to the tutorial, unclear where
23.43 ==============================================
23.44
23.45 -insert: lcp?
23.46 +Tacticals: , ? +
23.47
23.48 -Tacticals: , ? +
23.49 +"typedecl" is used in CTL/Base, but where is it introduced?
23.50 +In More Types chapter? Rephrase the CTL bit accordingly.
23.51
23.52 print_simpset (-> print_simps?)
23.53 (Another subsection Useful theorems, methods, and commands?)
23.54 @@ -107,7 +109,6 @@
23.55
23.56 demonstrate x : set xs in Sets. Or Tricks chapter?
23.57
23.58 -Table with symbols \<forall> etc. Apendix?
23.59 Appendix with HOL keywords. Say something about other keywords.
23.60
23.61
23.62 @@ -123,8 +124,6 @@
23.63 Nested inductive datatypes: another example/exercise:
23.64 size(t) <= size(subst s t)?
23.65
23.66 -Add Until to CTL.
23.67 -
23.68 insertion sort: primrec, later recdef
23.69
23.70 OTree:
24.1 --- a/doc-src/manual.bib Wed Oct 11 00:03:22 2000 +0200
24.2 +++ b/doc-src/manual.bib Wed Oct 11 09:09:06 2000 +0200
24.3 @@ -109,6 +109,9 @@
24.4
24.5 %B
24.6
24.7 +@book{Baader-Nipkow, author = {Franz Baader and Tobias Nipkow},
24.8 +  title = {Term Rewriting and All That}, publisher = CUP, year = 1998}
24.9 +
24.10 @incollection{basin91,
24.11 author = {David Basin and Matt Kaufmann},
24.12 title = {The {Boyer-Moore} Prover and {Nuprl}: An Experimental
24.13 @@ -231,7 +234,7 @@
24.14
24.15 @incollection{dybjer91,
24.16 author = {Peter Dybjer},
24.17 - title = {Inductive Sets and Families in {Martin-L\"of's} Type
24.18 + title = {Inductive Sets and Families in {Martin-L{\"o}f's} Type
24.19 Theory and Their Set-Theoretic Semantics},
24.20 crossref = {huet-plotkin91},
24.21 pages = {280-306}}
24.22 @@ -400,7 +403,7 @@
24.23 pages = {31-55}}
24.24
24.25 @inproceedings{huet88,
24.26 - author = {G\'erard Huet},
24.27 + author = {G{\'e}rard Huet},
24.28 title = {Induction Principles Formalized in the {Calculus of
24.29 Constructions}},
24.30 booktitle = {Programming of Future Generation Computers},
24.31 @@ -409,6 +412,12 @@
24.32 pages = {205-216},
24.33 publisher = {Elsevier}}
24.34
24.35 +@Book{Huth-Ryan-book,
24.36 + author = {Michael Huth and Mark Ryan},
24.37 +  title = {Logic in Computer Science: Modelling and Reasoning about Systems},
24.38 + publisher = CUP,
24.39 + year = 2000}
24.40 +
24.41 @InProceedings{Harrison:1996:MizarHOL,
24.42 author = {J. Harrison},
24.43 title = {A {Mizar} Mode for {HOL}},
24.44 @@ -462,7 +471,7 @@
24.45 pages = {366-380}}
24.46
24.47 @book{martinlof84,
24.48 - author = {Per Martin-L\"of},
24.49 + author = {Per Martin-L{\"o}f},
24.50 title = {Intuitionistic type theory},
24.51 year = 1984,
24.52 publisher = {Bibliopolis}}
24.53 @@ -520,7 +529,7 @@
24.54
24.55 @article{MuellerNvOS99,
24.56 author=
24.57 -{Olaf M\"uller and Tobias Nipkow and Oheimb, David von and Oscar Slotosch},
24.58 +{Olaf M{\"u}ller and Tobias Nipkow and Oheimb, David von and Oscar Slotosch},
24.59 title={{HOLCF = HOL + LCF}},journal=JFP,year=1999}
24.60
24.61 @Manual{Muzalewski:Mizar,
24.62 @@ -552,7 +561,7 @@
24.63 author = {Wolfgang Naraschewski and Tobias Nipkow},
24.64 title = {Type Inference Verified: Algorithm {W} in {Isabelle/HOL}},
24.65 booktitle = {Types for Proofs and Programs: Intl. Workshop TYPES '96},
24.66 - editor = {E. Gim\'enez and C. Paulin-Mohring},
24.67 + editor = {E. Gim{\'e}nez and C. Paulin-Mohring},
24.68 publisher = Springer,
24.69 series = LNCS,
24.70 volume = 1512,
24.71 @@ -607,8 +616,8 @@
24.72 @manual{isabelle-HOL,
24.73 author = {Tobias Nipkow and Lawrence C. Paulson and Markus Wenzel},
24.74 title = {{Isabelle}'s Logics: {HOL}},
24.75 - institution = {Institut f\"ur Informatik, Technische Universi\"at
24.76 - M\"unchen and Computer Laboratory, University of Cambridge},
24.77 +  institution = {Institut f{\"u}r Informatik, Technische Universit{\"a}t
24.78 + M{\"u}nchen and Computer Laboratory, University of Cambridge},
24.79 note = {\url{http://isabelle.in.tum.de/doc/logics-HOL.pdf}}}
24.80
24.81 @article{nipkow-prehofer,
24.82 @@ -630,8 +639,8 @@
24.83 year = 1993}
24.84
24.85 @book{nordstrom90,
24.86 - author = {Bengt {Nordstr\"om} and Kent Petersson and Jan Smith},
24.87 - title = {Programming in {Martin-L\"of}'s Type Theory. An
24.88 + author = {Bengt {Nordstr{\"o}m} and Kent Petersson and Jan Smith},
24.89 + title = {Programming in {Martin-L{\"o}f}'s Type Theory. An
24.90 Introduction},
24.91 publisher = {Oxford University Press},
24.92 year = 1990}
24.93 @@ -974,7 +983,7 @@
24.94 %V
24.95
24.96 @Unpublished{voelker94,
24.97 - author = {Norbert V\"olker},
24.98 + author = {Norbert V{\"o}lker},
24.99 title = {The Verification of a Timer Program using {Isabelle/HOL}},
24.100 url = {\url{ftp://ftp.fernuni-hagen.de/pub/fachb/et/dvt/projects/verification/timer.tar.gz}},
24.101 year = 1994,
24.102 @@ -1123,7 +1132,7 @@
24.103 publisher = {Springer}}
24.104
24.105 @book{types94,
24.106 - editor = {Peter Dybjer and Bengt Nordstr{\"om} and Jan Smith},
24.107 +  editor = {Peter Dybjer and Bengt Nordstr{\"o}m and Jan Smith},
24.108 title = TYPES # {: International Workshop {TYPES '94}},
24.109 booktitle = TYPES # {: International Workshop {TYPES '94}},
24.110 year = 1995,
24.111 @@ -1131,14 +1140,14 @@
24.112 series = {LNCS 996}}
24.113
24.114 @book{huet-plotkin91,
24.115 - editor = {{G\'erard} Huet and Gordon Plotkin},
24.116 + editor = {{G{\'e}rard} Huet and Gordon Plotkin},
24.117 title = {Logical Frameworks},
24.118 booktitle = {Logical Frameworks},
24.119 publisher = CUP,
24.120 year = 1991}
24.121
24.122 @book{huet-plotkin93,
24.123 - editor = {{G\'erard} Huet and Gordon Plotkin},
24.124 + editor = {{G{\'e}rard} Huet and Gordon Plotkin},
24.125 title = {Logical Environments},
24.126 booktitle = {Logical Environments},
24.127 publisher = CUP,
24.128 @@ -1155,7 +1164,7 @@
24.129 series = {LNCS 780}}
24.130
24.131 @proceedings{colog88,
24.132 - editor = {P. Martin-L\"of and G. Mints},
24.133 + editor = {P. Martin-L{\"o}f and G. Mints},
24.134 title = {COLOG-88: International Conference on Computer Logic},
24.135 booktitle = {COLOG-88: International Conference on Computer Logic},
24.136 year = {Published 1990},
25.1 --- a/src/HOL/Gfp.ML Wed Oct 11 00:03:22 2000 +0200
25.2 +++ b/src/HOL/Gfp.ML Wed Oct 11 09:09:06 2000 +0200
25.3 @@ -32,7 +32,7 @@
25.4
25.5 Goal "mono(f) ==> gfp(f) = f(gfp(f))";
25.6 by (REPEAT (ares_tac [equalityI,gfp_lemma2,gfp_lemma3] 1));
25.7 -qed "gfp_Tarski";
25.8 +qed "gfp_unfold";
25.9
25.10 (*** Coinduction rules for greatest fixed points ***)
25.11
25.12 @@ -72,26 +72,26 @@
25.13 by (rtac (Un_least RS Un_least) 1);
25.14 by (rtac subset_refl 1);
25.15 by (assume_tac 1);
25.16 -by (rtac (gfp_Tarski RS equalityD1 RS subset_trans) 1);
25.17 +by (rtac (gfp_unfold RS equalityD1 RS subset_trans) 1);
25.18 by (assume_tac 1);
25.19 by (rtac monoD 1 THEN assume_tac 1);
25.20 -by (stac (coinduct3_mono_lemma RS lfp_Tarski) 1);
25.21 +by (stac (coinduct3_mono_lemma RS lfp_unfold) 1);
25.22 by Auto_tac;
25.23 qed "coinduct3_lemma";
25.24
25.25 Goal
25.26 "[| mono(f); a:X; X <= f(lfp(%x. f(x) Un X Un gfp(f))) |] ==> a : gfp(f)";
25.27 by (rtac (coinduct3_lemma RSN (2,weak_coinduct)) 1);
25.28 -by (resolve_tac [coinduct3_mono_lemma RS lfp_Tarski RS ssubst] 1);
25.29 +by (resolve_tac [coinduct3_mono_lemma RS lfp_unfold RS ssubst] 1);
25.30 by Auto_tac;
25.31 qed "coinduct3";
25.32
25.33
25.34 -(** Definition forms of gfp_Tarski and coinduct, to control unfolding **)
25.35 +(** Definition forms of gfp_unfold and coinduct, to control unfolding **)
25.36
25.37 Goal "[| A==gfp(f); mono(f) |] ==> A = f(A)";
25.38 -by (auto_tac (claset() addSIs [gfp_Tarski], simpset()));
25.39 -qed "def_gfp_Tarski";
25.40 +by (auto_tac (claset() addSIs [gfp_unfold], simpset()));
25.41 +qed "def_gfp_unfold";
25.42
25.43 Goal "[| A==gfp(f); mono(f); a:X; X <= f(X Un A) |] ==> a: A";
25.44 by (auto_tac (claset() addSIs [coinduct], simpset()));
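The renaming above reflects what the theorem says: `gfp_unfold` is the unfolding equation `mono(f) ==> gfp(f) = f(gfp(f))` rather than the full Knaster-Tarski result. On a finite powerset lattice the greatest fixed point can be computed by iterating f downward from the top element; a small Python sketch, where the transition system is an invented example and not from these sources:

```python
def gfp(f, universe):
    """Greatest fixed point of a monotone f on a finite powerset,
    by downward iteration from the top element (terminates because
    the lattice is finite and f is monotone)."""
    x = frozenset(universe)
    while (y := frozenset(f(x))) != x:
        x = y
    return x

# Invented example: states 0..9, deterministic step n -> min(n+1, 9);
# f keeps the states in s that avoid the bad state 3 and step back
# into s.  Its gfp is the set of states whose whole run avoids 3 --
# a typical coinductive "safety" set.
U = range(10)
step = lambda n: min(n + 1, 9)
f = lambda s: {n for n in s if n != 3 and step(n) in s}
g = gfp(f, U)
assert g == frozenset(range(4, 10))
assert g == frozenset(f(g))   # gfp_unfold: gfp(f) = f(gfp(f))
```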
26.1 --- a/src/HOL/IMP/Denotation.ML Wed Oct 11 00:03:22 2000 +0200
26.2 +++ b/src/HOL/IMP/Denotation.ML Wed Oct 11 09:09:06 2000 +0200
26.3 @@ -13,7 +13,7 @@
26.4
26.5 Goal "C(WHILE b DO c) = C(IF b THEN c;WHILE b DO c ELSE SKIP)";
26.6 by (Simp_tac 1);
26.7 -by (EVERY[stac (Gamma_mono RS lfp_Tarski) 1,
26.8 +by (EVERY[stac (Gamma_mono RS lfp_unfold) 1,
26.9 stac (Gamma_def RS meta_eq_to_obj_eq RS fun_cong) 1,
26.10 Simp_tac 1,
26.11 IF_UNSOLVED no_tac]);
26.12 @@ -29,9 +29,9 @@
26.13 by Auto_tac;
26.14 (* while *)
26.15 by (rewtac Gamma_def);
26.16 -by (stac (rewrite_rule [Gamma_def] (Gamma_mono RS lfp_Tarski)) 1);
26.17 +by (stac (rewrite_rule [Gamma_def] (Gamma_mono RS lfp_unfold)) 1);
26.18 by (Fast_tac 1);
26.19 -by (stac (rewrite_rule [Gamma_def] (Gamma_mono RS lfp_Tarski)) 1);
26.20 +by (stac (rewrite_rule [Gamma_def] (Gamma_mono RS lfp_unfold)) 1);
26.21 by (Fast_tac 1);
26.22
26.23 qed "com1";
27.1 --- a/src/HOL/IMP/Hoare.ML Wed Oct 11 00:03:22 2000 +0200
27.2 +++ b/src/HOL/IMP/Hoare.ML Wed Oct 11 09:09:06 2000 +0200
27.3 @@ -87,7 +87,7 @@
27.4 by (assume_tac 2);
27.5 by (etac induct2 1);
27.6 by (fast_tac (claset() addSIs [monoI]) 1);
27.7 -by (stac gfp_Tarski 1);
27.8 +by (stac gfp_unfold 1);
27.9 by (fast_tac (claset() addSIs [monoI]) 1);
27.10 by (Fast_tac 1);
27.11 qed "wp_While";
28.1 --- a/src/HOL/Induct/LList.ML Wed Oct 11 00:03:22 2000 +0200
28.2 +++ b/src/HOL/Induct/LList.ML Wed Oct 11 09:09:06 2000 +0200
28.3 @@ -309,7 +309,7 @@
28.4 qed "Lconst_fun_mono";
28.5
28.6 (* Lconst(M) = CONS M (Lconst M) *)
28.7 -bind_thm ("Lconst", (Lconst_fun_mono RS (Lconst_def RS def_lfp_Tarski)));
28.8 +bind_thm ("Lconst", (Lconst_fun_mono RS (Lconst_def RS def_lfp_unfold)));
28.9
28.10 (*A typical use of co-induction to show membership in the gfp.
28.11 The containing set is simply the singleton {Lconst(M)}. *)
28.12 @@ -330,7 +330,7 @@
28.13 Goal "gfp(%N. CONS M N) = LList_corec M (%x. Some(x,x))";
28.14 by (rtac (equals_LList_corec RS fun_cong) 1);
28.15 by (Simp_tac 1);
28.16 -by (rtac (Lconst_fun_mono RS gfp_Tarski) 1);
28.17 +by (rtac (Lconst_fun_mono RS gfp_unfold) 1);
28.18 qed "gfp_Lconst_eq_LList_corec";
28.19
28.20
29.1 --- a/src/HOL/Lfp.ML Wed Oct 11 00:03:22 2000 +0200
29.2 +++ b/src/HOL/Lfp.ML Wed Oct 11 09:09:06 2000 +0200
29.3 @@ -33,7 +33,7 @@
29.4
29.5 Goal "mono(f) ==> lfp(f) = f(lfp(f))";
29.6 by (REPEAT (ares_tac [equalityI,lfp_lemma2,lfp_lemma3] 1));
29.7 -qed "lfp_Tarski";
29.8 +qed "lfp_unfold";
29.9
29.10 (*** General induction rule for least fixed points ***)
29.11
29.12 @@ -56,8 +56,8 @@
29.13 (** Definition forms of lfp_Tarski and induct, to control unfolding **)
29.14
29.15 Goal "[| h==lfp(f); mono(f) |] ==> h = f(h)";
29.16 -by (auto_tac (claset() addSIs [lfp_Tarski], simpset()));
29.17 -qed "def_lfp_Tarski";
29.18 +by (auto_tac (claset() addSIs [lfp_unfold], simpset()));
29.19 +qed "def_lfp_unfold";
29.20
29.21 val rew::prems = Goal
29.22 "[| A == lfp(f); mono(f); a:A; \
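Dually to the gfp case, the renamed `lfp_unfold` states `mono(f) ==> lfp(f) = f(lfp(f))`, and on a finite lattice the least fixed point is reached by iterating upward from the empty set. A hedged sketch in the style of the Nat construction (the cutoff at 10 is an artificial bound to keep the example finite):

```python
def lfp(f):
    """Least fixed point of a monotone f on finite sets, by upward
    Kleene iteration starting from the empty set."""
    x = frozenset()
    while (y := frozenset(f(x))) != x:
        x = y
    return x

# Nat-style example: the least set containing 0 and closed under
# successor, artificially cut off at 10 so the iteration terminates.
f = lambda s: {0} | {n + 1 for n in s if n < 9}
nats = lfp(f)
assert nats == frozenset(range(10))
assert nats == frozenset(f(nats))   # lfp_unfold: lfp(f) = f(lfp(f))
```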
30.1 --- a/src/HOL/NatDef.ML Wed Oct 11 00:03:22 2000 +0200
30.2 +++ b/src/HOL/NatDef.ML Wed Oct 11 09:09:06 2000 +0200
30.3 @@ -8,7 +8,7 @@
30.4 by (REPEAT (ares_tac [monoI, subset_refl, image_mono, Un_mono] 1));
30.5 qed "Nat_fun_mono";
30.6
30.7 -bind_thm ("Nat_unfold", Nat_fun_mono RS (Nat_def RS def_lfp_Tarski));
30.8 +bind_thm ("Nat_unfold", Nat_fun_mono RS (Nat_def RS def_lfp_unfold));
30.9
30.10 (* Zero is a natural number -- this also justifies the type definition*)
30.11 Goal "Zero_Rep: Nat";
31.1 --- a/src/HOL/Real/PNat.ML Wed Oct 11 00:03:22 2000 +0200
31.2 +++ b/src/HOL/Real/PNat.ML Wed Oct 11 09:09:06 2000 +0200
31.3 @@ -10,7 +10,7 @@
31.4 by (REPEAT (ares_tac [monoI, subset_refl, image_mono, Un_mono] 1));
31.5 qed "pnat_fun_mono";
31.6
31.7 -bind_thm ("pnat_unfold", pnat_fun_mono RS (pnat_def RS def_lfp_Tarski));
31.8 +bind_thm ("pnat_unfold", pnat_fun_mono RS (pnat_def RS def_lfp_unfold));
31.9
31.10 Goal "1 : pnat";
31.11 by (stac pnat_unfold 1);
32.1 --- a/src/HOL/Tools/inductive_package.ML Wed Oct 11 00:03:22 2000 +0200
32.2 +++ b/src/HOL/Tools/inductive_package.ML Wed Oct 11 09:09:06 2000 +0200
32.3 @@ -416,7 +416,7 @@
32.4 val _ = message " Proving the introduction rules ...";
32.5
32.6 val unfold = standard (mono RS (fp_def RS
32.7 - (if coind then def_gfp_Tarski else def_lfp_Tarski)));
32.8 + (if coind then def_gfp_unfold else def_lfp_unfold)));
32.9
32.10 fun select_disj 1 1 = []
32.11 | select_disj _ 1 = [rtac disjI1]
33.1 --- a/src/HOL/Trancl.ML Wed Oct 11 00:03:22 2000 +0200
33.2 +++ b/src/HOL/Trancl.ML Wed Oct 11 09:09:06 2000 +0200
33.3 @@ -15,7 +15,7 @@
33.4 by (REPEAT (ares_tac [monoI, subset_refl, comp_mono, Un_mono] 1));
33.5 qed "rtrancl_fun_mono";
33.6
33.7 -bind_thm ("rtrancl_unfold", rtrancl_fun_mono RS (rtrancl_def RS def_lfp_Tarski));
33.8 +bind_thm ("rtrancl_unfold", rtrancl_fun_mono RS (rtrancl_def RS def_lfp_unfold));
33.9
33.10 (*Reflexivity of rtrancl*)
33.11 Goal "(a,a) : r^*";
34.1 --- a/src/HOL/While.ML Wed Oct 11 00:03:22 2000 +0200
34.2 +++ b/src/HOL/While.ML Wed Oct 11 09:09:06 2000 +0200
34.3 @@ -54,7 +54,7 @@
34.4 \ ==> lfp f = fst(while (%(A,fA). A~=fA) (%(A,fA). (fA, f fA)) ({},f{}))";
34.5 by(res_inst_tac [("P","%(A,B).(A <= U & B = f A & A <= B & B <= lfp f)")]
34.6 while_rule 1);
34.7 - by(stac lfp_Tarski 1);
34.8 + by(stac lfp_unfold 1);
34.9 ba 1;
34.10 by(Clarsimp_tac 1);
34.11 by(blast_tac (claset() addDs [monoD]) 1);
34.12 @@ -67,7 +67,7 @@
34.13 by(clarsimp_tac (claset(),simpset() addsimps
34.14 [inv_image_def,finite_psubset_def,order_less_le]) 1);
34.15 by(blast_tac (claset() addSIs [finite_Diff] addDs [monoD]) 1);
34.16 -by(stac lfp_Tarski 1);
34.17 +by(stac lfp_unfold 1);
34.18 ba 1;
34.19 by(asm_simp_tac (simpset() addsimps [monoD]) 1);
34.20 qed "lfp_conv_while";
35.1 --- a/src/HOL/ex/set.ML Wed Oct 11 00:03:22 2000 +0200
35.2 +++ b/src/HOL/ex/set.ML Wed Oct 11 09:09:06 2000 +0200
35.3 @@ -102,7 +102,7 @@
35.4
35.5 Goal "EX X. X = - (g``(- (f``X)))";
35.6 by (rtac exI 1);
35.7 -by (rtac lfp_Tarski 1);
35.8 +by (rtac lfp_unfold 1);
35.9 by (REPEAT (ares_tac [monoI, image_mono, Compl_anti_mono] 1));
35.10 qed "decomposition";
35.11