1.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
1.2 +++ b/doc-src/CHANGES-93.txt Tue Nov 09 16:47:38 1993 +0100
1.3 @@ -0,0 +1,90 @@
1.4 +**** Isabelle-93 : a faster version of Isabelle-92 ****
1.5 +
1.6 +Isabelle now runs faster through a combination of improvements: pattern
1.7 +unification, discrimination nets and removal of assumptions during
1.8 +simplification. A new simplifier, although less flexible than the old one,
1.9 +runs many times faster for large subgoals. Classical reasoning
1.10 +(e.g. fast_tac) runs up to 30% faster when large numbers of rules are
1.11 +involved. Incompatibilities are few, and mostly concern the simplifier.
1.12 +
1.13 +THE SPEEDUPS
1.14 +
1.15 +The new simplifier is described in the Reference Manual, Chapter 8. See
1.16 +below for advice on converting.
1.17 +
1.18 +Pattern unification is completely invisible to users. It efficiently
1.19 +handles a common case of higher-order unification.
1.20 +
1.21 +Discrimination nets replace the old stringtrees. They provide fast lookup
1.22 +in a large set of rules for matching or unification. New "net" tactics
1.23 +replace the "compat_..." tactics based on stringtrees. Tactics
1.24 +biresolve_from_nets_tac, bimatch_from_nets_tac, resolve_from_net_tac and
1.25 +match_from_net_tac take a net, rather than a list of rules, and perform
1.26 +resolution or matching. Tactics net_biresolve_tac, net_bimatch_tac,
1.27 +net_resolve_tac and net_match_tac take a list of rules, build a net
1.28 +(internally) and perform resolution or matching.
1.29 +
1.30 +The tactical METAHYPS, which allows a subgoal's hypotheses to be taken as a
1.31 +list of theorems, has been extended to handle unknowns (although not type
1.32 +unknowns). The simplification tactics now use METAHYPS to economise on
1.33 +storage consumption, and to avoid problems involving "parameters" bound in
1.34 +a subgoal. The modified simplifier now requires the auto_tac to take an
1.35 +extra argument: a list of theorems, which represents the assumptions of the
1.36 +current subgoal.
1.37 +
1.38 +OTHER CHANGES
1.39 +
1.40 +Apart from minor improvements in Pure Isabelle, the main other changes are
1.41 +extensions to object-logics. HOL now contains a treatment of co-induction
1.42 +and co-recursion, while ZF contains a formalization of equivalence classes,
1.43 +the integers and binary arithmetic. None of this material is documented.
1.44 +
1.45 +
1.46 +CONVERTING FROM THE OLD TO THE NEW SIMPLIFIER (for FOL/ZF/LCF/HOL)
1.47 +
1.48 +1. Run a crude shell script to convert your ML-files:
1.49 +
1.50 + change_simp *ML
1.51 +
1.52 +2. Most congruence rules are no longer needed. Hence you should remove all
1.53 +calls to mk_congs and mk_typed_congs (they no longer exist) and most
1.54 +occurrences of addcongs. The only exceptions are congruence rules for special
1.55 +constants like (in FOL)
1.56 +
1.57 +[| ?P <-> ?P'; ?P' ==> ?Q <-> ?Q' |] ==> (?P --> ?Q) <-> (?P' --> ?Q')
1.58 +and
1.59 +[| A=A'; !!x. x:A' ==> P(x) <-> P'(x) |] ==>
1.60 +(ALL x:A. P(x)) <-> (ALL x:A'. P'(x))
1.61 +
1.62 +where some of the arguments are simplified under additional premises. Most
1.63 +likely you don't have any constructs like that, or they are already included
1.64 +in the basic simpset.
1.65 +
1.66 +3. In case you have used the addsplits facilities of the old simplifier:
1.67 +The case-splitting and simplification tactics have been separated. If you
1.68 +want to interleave case-splitting with simplification, you have to do so
1.69 +explicitly by the following command:
1.70 +
1.71 +ss setloop (split_tac split_thms)
1.72 +
1.73 +where ss is a simpset and split_thms the list of case-splitting theorems.
1.74 +The shell script in step 1 tries to convert from addsplits to setloop
1.75 +automatically.
1.76 +
1.77 +4. If you have used setauto, you have to change it to setsolver by hand.
1.78 +The solver is more delicate than the auto tactic used to be. For details see
1.79 +the full description of the new simplifier. One slight incompatibility: the
1.80 +old auto tactic could sometimes see premises as part of the proof state,
1.81 +whereas now they are always passed explicitly as arguments.
1.82 +
1.83 +5. It is quite likely that a few proofs require further hand massaging.
1.84 +
1.85 +Known incompatibilities:
1.86 +- Applying a rewrite rule cannot instantiate a subgoal. This rules out
1.87 +solving a premise of a conditional rewrite rule with extra unknowns by
1.88 +rewriting. Only the solver can instantiate.
1.89 +
1.90 +Known bugs:
1.91 +- The names of bound variables can be changed by the simplifier.
1.92 +
1.93 +
2.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
2.2 +++ b/doc-src/Errata.txt Tue Nov 09 16:47:38 1993 +0100
2.3 @@ -0,0 +1,27 @@
2.4 +ERRATA FOR ISABELLE MANUAL
2.5 +
2.6 +** THM : BASIC INFERENCE **
2.7 +
2.8 +Pure/tactic/lift_inst_rule: now checks for distinct parameters (could also
2.9 +compare with free variable names, though). Variables in the insts are now
2.10 +lifted over all parameters; their index is also increased. Type vars in
2.11 +the lhs variables are also increased by maxidx+1; this is essential for HOL
2.12 +examples to work.
2.13 +
2.14 +
2.15 +** THEORY MATTERS (GENERAL) **
2.16 +
2.17 +Definitions: users must ensure that the left-hand side is nothing
2.18 +more complex than a function application -- never using fancy syntax. E.g.
2.19 +never
2.20 +> ("the_def", "THE y. P(y) == Union({y . x:{0}, P(y)})" ),
2.21 +but
2.22 +< ("the_def", "The(P) == Union({y . x:{0}, P(y)})" ),
2.23 +
2.24 +Provers/classical, simp (new simplifier), tsimp (old simplifier), ind
2.25 +
2.26 +** SYSTEMS MATTERS **
2.27 +
2.28 +explain make system? maketest
2.29 +
2.30 +expandshort
3.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
3.2 +++ b/doc-src/MacroHints Tue Nov 09 16:47:38 1993 +0100
3.3 @@ -0,0 +1,142 @@
3.4 +
3.5 +Hints
3.6 +=====
3.7 +
3.8 +This is an incomplete list of facts about the new version of the syntax
3.9 +module (the macro system).
3.10 +
3.11 +
3.12 +- Syntax of "translations" section is list of <xrule> where: (metachars: [ | ])
3.13 +
3.14 + <xrule> ::= [(<id>)] <string> [ => | <= | == ] [(<id>)] <string>
3.15 +
3.16 + One such line specifies a parse rule (=>) or a print rule (<=) or both (==).
3.17 + The optional <id> before each <string> specifies the name of the syntactic
3.18 + category to be used for parsing the string; the default is logic. Note that
3.19 + this has no influence on the applicability of rules.
3.20 +
3.21 + Example:
3.22 +
3.23 + translations
3.24 + (prop) "x:X" == (prop) "|- x:X"
3.25 + "Lam x:A.B" == "Abs(A, %x.B)"
3.26 + "Pi x:A.B" => "Prod(A, %x.B)"
3.27 +
3.28 + All rules of a theory will be shown in their internal (ast * ast) form by
3.29 + Syntax.print_trans.
3.30 +
3.31 +- Caveat: rewrite rules know neither "context" nor "semantics", e.g. something
3.32 + like "('a, Nil)Cons", "% Nil Cons. Cons(a, Nil)" or "Cons(a, [])" will be
3.33 + rewritten by the rules "[x]" <= "Cons(x, [])", "[]" <= "Nil"; (this is a
3.34 + general problem with macro systems);
3.35 +
3.36 +- syn_of: theory -> Syntax.syntax
3.37 +
3.38 +- Syntax.print_gram shows grammar of syntax
3.39 +
3.40 +- Syntax.print_trans shows translations of syntax
3.41 +
3.42 +- Syntax.print_syntax shows grammar and translations of syntax
3.43 +
3.44 +- Ast.stat_normalize := true enables output of statistics after normalizing;
3.45 +
3.46 +- Ast.trace_normalize := true enables verbose output while normalizing and
3.47 + statistics;
3.48 +
3.49 +- eta_contract := false disables eta contraction when printing terms;
3.50 +
3.51 +- asts: (see also Pure/Syntax/ast.ML *)
3.52 +
3.53 + (*Asts come in two flavours:
3.54 + - proper asts representing terms and types: Variables are treated like
3.55 + Constants;
3.56 + - patterns used as lhs and rhs in rules: Variables are placeholders for
3.57 + proper asts.*)
3.58 +
3.59 + datatype ast =
3.60 + Constant of string | (* "not", "_%", "fun" *)
3.61 + Variable of string | (* x, ?x, 'a, ?'a *)
3.62 + Appl of ast list; (* (f x y z), ("fun" 'a 'b) *)
3.63 +
3.64 + (*the list of subasts of an Appl node has to contain at least 2 elements, i.e.
3.65 + there are no empty asts or nullary applications; use mk_appl for convenience*)
3.66 +
3.67 +- ast output style:
3.68 + Constant a -> "a"
3.69 + Variable a -> a
3.70 + Appl [ast1, ..., astn] -> (ast1 ... astn)
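
  For illustration only, the two ast flavours and the output style above can
  be mimicked in a small Python sketch (hypothetical code, not part of
  Isabelle; the real implementation is the ML datatype in Pure/Syntax/ast.ML):

```python
# Illustrative Python model of the ast datatype (not Isabelle code).

class Constant:
    def __init__(self, name): self.name = name

class Variable:
    def __init__(self, name): self.name = name

class Appl:
    def __init__(self, asts):
        # An Appl node must have at least 2 subasts: there are no
        # empty asts or nullary applications.
        assert len(asts) >= 2
        self.asts = asts

def show(ast):
    """Render an ast in the output style described above."""
    if isinstance(ast, Constant):
        return '"%s"' % ast.name        # Constant a -> "a"
    if isinstance(ast, Variable):
        return ast.name                 # Variable a -> a
    return "(" + " ".join(show(a) for a in ast.asts) + ")"

print(show(Appl([Constant("fun"), Variable("'a"), Variable("'b")])))
# ("fun" 'a 'b)
```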
3.71 +
3.72 +- sext: (see also Pure/Syntax/sextension.ML)
3.73 +
3.74 + (** datatype sext **)
3.75 +
3.76 + datatype xrule =
3.77 + op |-> of (string * string) * (string * string) |
3.78 + op <-| of (string * string) * (string * string) |
3.79 + op <-> of (string * string) * (string * string);
3.80 +
3.81 + datatype sext =
3.82 + Sext of {
3.83 + mixfix: mixfix list,
3.84 + parse_translation: (string * (term list -> term)) list,
3.85 + print_translation: (string * (term list -> term)) list} |
3.86 + NewSext of {
3.87 + mixfix: mixfix list,
3.88 + xrules: xrule list,
3.89 + parse_ast_translation: (string * (ast list -> ast)) list,
3.90 + parse_preproc: (ast -> ast) option,
3.91 + parse_postproc: (ast -> ast) option,
3.92 + parse_translation: (string * (term list -> term)) list,
3.93 + print_translation: (string * (term list -> term)) list,
3.94 + print_preproc: (ast -> ast) option,
3.95 + print_postproc: (ast -> ast) option,
3.96 + print_ast_translation: (string * (ast list -> ast)) list};
3.97 +
3.98 +- local (thy, ext) order of rules is preserved, global (merge) order is
3.99 + unspecified;
3.100 +
3.101 +- read asts contain a mixture of Constant and Variable atoms (some
3.102 + Variables become Const later);
3.103 +
3.104 +- *.thy-file/ML-section: all declarations will be exported, therefore
3.105 + one should use local-in-end constructs where appropriate; especially
3.106 + the examples in Logics/Defining could be better behaved;
3.107 +
3.108 +- [document the asts generated by the standard syntactic categories
3.109 + (idt, idts, args, ...)];
3.110 +
3.111 +- print(_ast)_translation: the constant has to disappear or exception
3.112 +  Match be raised, otherwise the printer will not terminate;
3.113 +
3.114 +- binders are implemented traditionally, i.e. as parse/print_translations
3.115 + (compatibility, alpha, eta, HOL hack easy);
3.116 +
3.117 +- changes to the term/ast "parsetree" structure: renamed most constants
3.118 + (_args, _idts, ...), unified typ applications, _tapp/_tappl, type atoms
3.119 +  now Const rather than Free;
3.120 +
3.121 +- misfeature: eta-contraction is performed before rewriting; therefore
3.122 +  bounded quantifiers, Collect etc. may fall back to their internal form;
3.123 +
3.124 +- changes and fixes to printing of types:
3.125 +
3.126 + "'a => 'b => 'c" now printed as "['a,'b] => 'c";
3.127 +
3.128 + constraints now printed iff not dummyT and show_types enabled, changed
3.129 +  internal print_translations accordingly; old translations that introduce
3.130 +  frees may cause printing of constraints at all occurrences;
3.131 +
3.132 + constraints of bound vars now printed where bound (i.e. "%x::'a. x" rather
3.133 + than "%x. x::'a") and put in braces if necessary ("%(x::'a) y::'b. f(x,y)");
3.134 +
3.135 + constraints of bound var printed even if a free var in another scope happens
3.136 + to have the same name and type;
3.137 +
3.138 +- [man: toplevel pretty printers for NJ];
3.139 +
3.140 +- (syntactic constants "_..." (e.g. "_idts") are reserved, you may use "@..."
3.141 + or "*..." instead);
3.142 +
3.143 +- Printer: clash of fun/type constants with concrete syntax type/fun
3.144 +  constants causes incorrect printing of applications;
3.145 +
4.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
4.2 +++ b/doc-src/ind-defs.tex Tue Nov 09 16:47:38 1993 +0100
4.3 @@ -0,0 +1,1585 @@
4.4 +\documentstyle[11pt,a4,proof,lcp,alltt,amssymbols,draft]{article}
4.5 +\newif\ifCADE
4.6 +\CADEfalse
4.7 +
4.8 +\title{A Fixedpoint Approach to Implementing (Co-)Inductive Definitions\\
4.9 + DRAFT\thanks{Research funded by the SERC (grants GR/G53279,
4.10 + GR/H40570) and by the ESPRIT Basic Research Action 6453 `Types'.}}
4.11 +
4.12 +\author{{\em Lawrence C. Paulson}\\
4.13 + Computer Laboratory, University of Cambridge}
4.14 +\date{\today}
4.15 +\setcounter{secnumdepth}{2} \setcounter{tocdepth}{2}
4.16 +
4.17 +\def\picture #1 by #2 (#3){% pictures: width by height (name)
4.18 + \message{Picture #3}
4.19 + \vbox to #2{\hrule width #1 height 0pt depth 0pt
4.20 + \vfill \special{picture #3}}}
4.21 +
4.22 +
4.23 +\newcommand\sbs{\subseteq}
4.24 +\newcommand\List[1]{\lbrakk#1\rbrakk}
4.25 +\let\To=\Rightarrow
4.26 +\newcommand\Var[1]{{?\!#1}}
4.27 +
4.28 +
4.29 +%%%\newcommand\Pow{{\tt Pow}}
4.30 +\let\pow=\wp
4.31 +\newcommand\RepFun{{\tt RepFun}}
4.32 +\newcommand\pair[1]{\langle#1\rangle}
4.33 +\newcommand\cons{{\tt cons}}
4.34 +\def\succ{{\tt succ}}
4.35 +\newcommand\split{{\tt split}}
4.36 +\newcommand\fst{{\tt fst}}
4.37 +\newcommand\snd{{\tt snd}}
4.38 +\newcommand\converse{{\tt converse}}
4.39 +\newcommand\domain{{\tt domain}}
4.40 +\newcommand\range{{\tt range}}
4.41 +\newcommand\field{{\tt field}}
4.42 +\newcommand\bndmono{\hbox{\tt bnd\_mono}}
4.43 +\newcommand\lfp{{\tt lfp}}
4.44 +\newcommand\gfp{{\tt gfp}}
4.45 +\newcommand\id{{\tt id}}
4.46 +\newcommand\trans{{\tt trans}}
4.47 +\newcommand\wf{{\tt wf}}
4.48 +\newcommand\wfrec{\hbox{\tt wfrec}}
4.49 +\newcommand\nat{{\tt nat}}
4.50 +\newcommand\natcase{\hbox{\tt nat\_case}}
4.51 +\newcommand\transrec{{\tt transrec}}
4.52 +\newcommand\rank{{\tt rank}}
4.53 +\newcommand\univ{{\tt univ}}
4.54 +\newcommand\Vrec{{\tt Vrec}}
4.55 +\newcommand\Inl{{\tt Inl}}
4.56 +\newcommand\Inr{{\tt Inr}}
4.57 +\newcommand\case{{\tt case}}
4.58 +\newcommand\lst{{\tt list}}
4.59 +\newcommand\Nil{{\tt Nil}}
4.60 +\newcommand\Cons{{\tt Cons}}
4.61 +\newcommand\lstcase{\hbox{\tt list\_case}}
4.62 +\newcommand\lstrec{\hbox{\tt list\_rec}}
4.63 +\newcommand\length{{\tt length}}
4.64 +\newcommand\listn{{\tt listn}}
4.65 +\newcommand\acc{{\tt acc}}
4.66 +\newcommand\primrec{{\tt primrec}}
4.67 +\newcommand\SC{{\tt SC}}
4.68 +\newcommand\CONST{{\tt CONST}}
4.69 +\newcommand\PROJ{{\tt PROJ}}
4.70 +\newcommand\COMP{{\tt COMP}}
4.71 +\newcommand\PREC{{\tt PREC}}
4.72 +
4.73 +\newcommand\quniv{{\tt quniv}}
4.74 +\newcommand\llist{{\tt llist}}
4.75 +\newcommand\LNil{{\tt LNil}}
4.76 +\newcommand\LCons{{\tt LCons}}
4.77 +\newcommand\lconst{{\tt lconst}}
4.78 +\newcommand\lleq{{\tt lleq}}
4.79 +\newcommand\map{{\tt map}}
4.80 +\newcommand\term{{\tt term}}
4.81 +\newcommand\Apply{{\tt Apply}}
4.82 +\newcommand\termcase{{\tt term\_case}}
4.83 +\newcommand\rev{{\tt rev}}
4.84 +\newcommand\reflect{{\tt reflect}}
4.85 +\newcommand\tree{{\tt tree}}
4.86 +\newcommand\forest{{\tt forest}}
4.87 +\newcommand\Part{{\tt Part}}
4.88 +\newcommand\TF{{\tt tree\_forest}}
4.89 +\newcommand\Tcons{{\tt Tcons}}
4.90 +\newcommand\Fcons{{\tt Fcons}}
4.91 +\newcommand\Fnil{{\tt Fnil}}
4.92 +\newcommand\TFcase{\hbox{\tt TF\_case}}
4.93 +\newcommand\Fin{{\tt Fin}}
4.94 +\newcommand\QInl{{\tt QInl}}
4.95 +\newcommand\QInr{{\tt QInr}}
4.96 +\newcommand\qsplit{{\tt qsplit}}
4.97 +\newcommand\qcase{{\tt qcase}}
4.98 +\newcommand\Con{{\tt Con}}
4.99 +\newcommand\data{{\tt data}}
4.100 +
4.101 +\sloppy
4.102 +\binperiod %%%treat . like a binary operator
4.103 +
4.104 +\begin{document}
4.105 +\pagestyle{empty}
4.106 +\begin{titlepage}
4.107 +\maketitle
4.108 +\begin{abstract}
4.109 + Several theorem provers provide commands for formalizing recursive
4.110 + datatypes and/or inductively defined sets. This paper presents a new
4.111 + approach, based on fixedpoint definitions. It is unusually general:
4.112 + it admits all monotone inductive definitions. It is conceptually simple,
4.113 + which has allowed the easy implementation of mutual recursion and other
4.114 + conveniences. It also handles co-inductive definitions: simply replace
4.115 + the least fixedpoint by a greatest fixedpoint. This represents the first
4.116 + automated support for co-inductive definitions.
4.117 +
4.118 + Examples include lists of $n$ elements, the accessible part of a relation
4.119 + and the set of primitive recursive functions. One example of a
4.120 + co-inductive definition is bisimulations for lazy lists. \ifCADE\else
4.121 + Recursive datatypes are examined in detail, as well as one example of a
4.122 + ``co-datatype'': lazy lists. The appendices are simple user's manuals
4.123 + for this Isabelle/ZF package.\fi
4.124 +
4.125 + The method has been implemented in Isabelle's ZF set theory. It should
4.126 + be applicable to any logic in which the Knaster-Tarski Theorem can be
4.127 + proved. The paper briefly describes a method of formalizing
4.128 + non-well-founded data structures in standard ZF set theory.
4.129 +\end{abstract}
4.130 +%
4.131 +\begin{center} Copyright \copyright{} \number\year{} by Lawrence C. Paulson
4.132 +\end{center}
4.133 +\thispagestyle{empty}
4.134 +\end{titlepage}
4.135 +
4.136 +\tableofcontents
4.137 +\cleardoublepage
4.138 +\pagenumbering{arabic}\pagestyle{headings}\DRAFT
4.139 +
4.140 +\section{Introduction}
4.141 +Several theorem provers provide commands for formalizing recursive data
4.142 +structures, like lists and trees. Examples include Boyer and Moore's shell
4.143 +principle~\cite{bm79} and Melham's recursive type package for the HOL
4.144 +system~\cite{melham89}. Such data structures are called {\bf datatypes}
4.145 +below, by analogy with {\tt datatype} definitions in Standard~ML\@.
4.146 +
4.147 +A datatype is but one example of an {\bf inductive definition}. This
4.148 +specifies the least set closed under given rules~\cite{aczel77}. The
4.149 +collection of theorems in a logic is inductively defined. A structural
4.150 +operational semantics~\cite{hennessy90} is an inductive definition of a
4.151 +reduction or evaluation relation on programs. A few theorem provers
4.152 +provide commands for formalizing inductive definitions; these include
4.153 +Coq~\cite{paulin92} and again the HOL system~\cite{camilleri92}.
4.154 +
4.155 +The dual notion is that of a {\bf co-inductive definition}. This specifies
4.156 +the greatest set closed under given rules. Important examples include
4.157 +using bisimulation relations to formalize equivalence of
4.158 +processes~\cite{milner89} or lazy functional programs~\cite{abramsky90}.
4.159 +Other examples include lazy lists and other infinite data structures; these
4.160 +are called {\bf co-datatypes} below.
4.161 +
4.162 +Most existing implementations of datatype and inductive definitions accept
4.163 +an artificially narrow class of inputs, and are not easily extended. The
4.164 +shell principle and Coq's inductive definition rules are built into the
4.165 +underlying logic. Melham's packages derive datatypes and inductive
4.166 +definitions from specialized constructions in higher-order logic.
4.167 +
4.168 +This paper describes a package based on a fixedpoint approach. Least
4.169 +fixedpoints yield inductive definitions; greatest fixedpoints yield
4.170 +co-inductive definitions. The package is uniquely powerful:
4.171 +\begin{itemize}
4.172 +\item It accepts the largest natural class of inductive definitions, namely
4.173 + all monotone inductive definitions.
4.174 +\item It accepts a wide class of datatype definitions.
4.175 +\item It handles co-inductive and co-datatype definitions. Most of
4.176 + the discussion below applies equally to inductive and co-inductive
4.177 + definitions, and most of the code is shared. To my knowledge, this is
4.178 + the only package supporting co-inductive definitions.
4.179 +\item Definitions may be mutually recursive.
4.180 +\end{itemize}
4.181 +The package is implemented in Isabelle~\cite{isabelle-intro}, using ZF set
4.182 +theory \cite{paulson-set-I,paulson-set-II}. However, the fixedpoint
4.183 +approach is independent of Isabelle. The recursion equations are specified
4.184 +as introduction rules for the mutually recursive sets. The package
4.185 +transforms these rules into a mapping over sets, and attempts to prove that
4.186 +the mapping is monotonic and well-typed. If successful, the package
4.187 +makes fixedpoint definitions and proves the introduction, elimination and
4.188 +(co-)induction rules. The package consists of several Standard ML
4.189 +functors~\cite{paulson91}; it accepts its argument and returns its result
4.190 +as ML structures.
4.191 +
4.192 +Most datatype packages equip the new datatype with some means of expressing
4.193 +recursive functions. This is the main thing lacking from my package. Its
4.194 +fixedpoint operators define recursive sets, not recursive functions. But
4.195 +the Isabelle/ZF theory provides well-founded recursion and other logical
4.196 +tools to simplify this task~\cite{paulson-set-II}.
4.197 +
4.198 +\S2 briefly introduces the least and greatest fixedpoint operators. \S3
4.199 +discusses the form of introduction rules, mutual recursion and other points
4.200 +common to inductive and co-inductive definitions. \S4 discusses induction
4.201 +and co-induction rules separately. \S5 presents several examples,
4.202 +including a co-inductive definition. \S6 describes datatype definitions,
4.203 +while \S7 draws brief conclusions. \ifCADE\else The appendices are simple
4.204 +user's manuals for this Isabelle/ZF package.\fi
4.205 +
4.206 +Most of the definitions and theorems shown below have been generated by the
4.207 +package. I have renamed some variables to improve readability.
4.208 +
4.209 +\section{Fixedpoint operators}
4.210 +In set theory, the least and greatest fixedpoint operators are defined as
4.211 +follows:
4.212 +\begin{eqnarray*}
4.213 + \lfp(D,h) & \equiv & \inter\{X\sbs D. h(X)\sbs X\} \\
4.214 + \gfp(D,h) & \equiv & \union\{X\sbs D. X\sbs h(X)\}
4.215 +\end{eqnarray*}
4.216 +Say that $h$ is {\bf bounded by}~$D$ if $h(D)\sbs D$, and {\bf monotone} if
4.217 +$h(A)\sbs h(B)$ for all $A$ and $B$ such that $A\sbs B\sbs D$. If $h$ is
4.218 +bounded by~$D$ and monotone then both operators yield fixedpoints:
4.219 +\begin{eqnarray*}
4.220 + \lfp(D,h) & = & h(\lfp(D,h)) \\
4.221 + \gfp(D,h) & = & h(\gfp(D,h))
4.222 +\end{eqnarray*}
4.223 +These equations are instances of the Knaster-Tarski theorem, which states
4.224 +that every monotonic function over a complete lattice has a
4.225 +fixedpoint~\cite{davey&priestley}. It is obvious from their definitions
4.226 +that $\lfp$ must be the least fixedpoint, and $\gfp$ the greatest.
4.227 +
4.228 +This fixedpoint theory is simple. The Knaster-Tarski theorem is easy to
4.229 +prove. Showing monotonicity of~$h$ is trivial, in typical cases. We must
4.230 +also exhibit a bounding set~$D$ for~$h$. Sometimes this is trivial, as
4.231 +when a set of ``theorems'' is (co-)inductively defined over some previously
4.232 +existing set of ``formulae.'' But defining the bounding set for
4.233 +(co-)datatype definitions requires some effort; see~\S\ref{data-sec} below.
4.234 +
4.235 +
4.236 +\section{Elements of an inductive or co-inductive definition}\label{basic-sec}
4.237 +Consider a (co-)inductive definition of the sets $R_1$, \ldots,~$R_n$, in
4.238 +mutual recursion. They will be constructed from previously existing sets
4.239 +$D_1$, \ldots,~$D_n$, respectively, which are called their {\bf domains}.
4.240 +The construction yields not $R_i\sbs D_i$ but $R_i\sbs D_1+\cdots+D_n$, where
4.241 +$R_i$ is the image of~$D_i$ under an injection~\cite[\S4.5]{paulson-set-II}.
4.242 +
4.243 +The definition may involve arbitrary parameters $\vec{p}=p_1$,
4.244 +\ldots,~$p_k$. Each recursive set then has the form $R_i(\vec{p})$. The
4.245 +parameters must be identical every time they occur within a definition. This
4.246 +would appear to be a serious restriction compared with other systems such as
4.247 +Coq~\cite{paulin92}. For instance, we cannot define the lists of
4.248 +$n$ elements as the set $\listn(A,n)$ using rules where the parameter~$n$
4.249 +varies. \S\ref{listn-sec} describes how to express this definition using the
4.250 +package.
4.251 +
4.252 +To avoid clutter below, the recursive sets are shown as simply $R_i$
4.253 +instead of $R_i(\vec{p})$.
4.254 +
4.255 +\subsection{The form of the introduction rules}\label{intro-sec}
4.256 +The body of the definition consists of the desired introduction rules,
4.257 +specified as strings. The conclusion of each rule must have the form $t\in
4.258 +R_i$, where $t$ is any term. Premises typically have the same form, but
4.259 +they can have the more general form $t\in M(R_i)$ or express arbitrary
4.260 +side-conditions.
4.261 +
4.262 +The premise $t\in M(R_i)$ is permitted if $M$ is a monotonic operator on
4.263 +sets, satisfying the rule
4.264 +\[ \infer{M(A)\sbs M(B)}{A\sbs B} \]
4.265 +The inductive definition package must be supplied monotonicity rules for
4.266 +all such premises.
4.267 +
4.268 +Because any monotonic $M$ may appear in premises, the criterion for an
4.269 +acceptable definition is semantic rather than syntactic. A suitable choice
4.270 +of~$M$ and~$t$ can express a lot. The powerset operator $\pow$ is
4.271 +monotone, and the premise $t\in\pow(R)$ expresses $t\sbs R$; see
4.272 +\S\ref{acc-sec} for an example. The `list of' operator is monotone, and
4.273 +the premise $t\in\lst(R)$ avoids having to encode the effect of~$\lst(R)$
4.274 +using mutual recursion; see \S\ref{primrec-sec} and also my earlier
4.275 +paper~\cite[\S4.4]{paulson-set-II}.
4.276 +
4.277 +Introduction rules may also contain {\bf side-conditions}. These are
4.278 +premises consisting of arbitrary formulae not mentioning the recursive
4.279 +sets. Side-conditions typically involve type-checking. One example is the
4.280 +premise $a\in A$ in the following rule from the definition of lists:
4.281 +\[ \infer{\Cons(a,l)\in\lst(A)}{a\in A & l\in\lst(A)} \]
4.282 +
4.283 +\subsection{The fixedpoint definitions}
4.284 +The package translates the list of desired introduction rules into a fixedpoint
4.285 +definition. Consider, as a running example, the finite set operator
4.286 +$\Fin(A)$: the set of all finite subsets of~$A$. It can be
4.287 +defined as the least set closed under the rules
4.288 +\[ \emptyset\in\Fin(A) \qquad
4.289 + \infer{\{a\}\un b\in\Fin(A)}{a\in A & b\in\Fin(A)}
4.290 +\]
4.291 +
4.292 +The domain for a (co-)inductive definition must be some existing set closed
4.293 +under the rules. A suitable domain for $\Fin(A)$ is $\pow(A)$, the set of all
4.294 +subsets of~$A$. The package generates the definition
4.295 +\begin{eqnarray*}
4.296 + \Fin(A) & \equiv & \lfp(\pow(A), \;
4.297 + \begin{array}[t]{r@{\,}l}
4.298 + \lambda X. \{z\in\pow(A). & z=\emptyset \disj{} \\
4.299 + &(\exists a\,b. z=\{a\}\un b\conj a\in A\conj b\in X)\})
4.300 + \end{array}
4.301 +\end{eqnarray*}
4.302 +The contribution of each rule to the definition of $\Fin(A)$ should be
4.303 +obvious. A co-inductive definition is similar but uses $\gfp$ instead
4.304 +of~$\lfp$.
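
Since the body of this definition is monotone and bounded by $\pow(A)$, the
least fixedpoint over a finite domain can also be reached by iterating the
body from the empty set. The following Python sketch (purely illustrative,
with invented names; the package itself works by proof, not by computation)
demonstrates this for $\Fin(A)$:

```python
from itertools import combinations

def powerset(a):
    """pow(A): all subsets of a finite set A, as frozensets."""
    elems = list(a)
    return {frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

def lfp(h):
    """Least fixedpoint of h on a finite domain, computed by iteration
    from the bottom element; for monotone, bounded h this agrees with
    the intersection definition of lfp."""
    x = frozenset()
    while True:
        nxt = h(x)
        if nxt == x:
            return x
        x = nxt

def fin(a):
    """Fin(A): the least set closed under the two rules above."""
    dom = powerset(a)
    def h(x):
        out = {frozenset()}                    # {} in Fin(A)
        for b in x:                            # b in Fin(A)
            for elt in a:                      # a in A
                out.add(frozenset({elt}) | b)  # {a} Un b in Fin(A)
        return frozenset(out & dom)            # stay within pow(A)
    return lfp(h)

# For finite A, the least fixedpoint is all of pow(A):
assert fin({0, 1}) == powerset({0, 1})
```

Iteration from the bottom element converges here only because the domain is
finite; the intersection definition of $\lfp$ given earlier is the general form.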
4.305 +
4.306 +The package must prove that the fixedpoint operator is applied to a
4.307 +monotonic function. If the introduction rules have the form described
4.308 +above, and if the package is supplied a monotonicity theorem for every
4.309 +$t\in M(R_i)$ premise, then this proof is trivial.\footnote{Due to the
4.310 + presence of logical connectives in the fixedpoint's body, the
4.311 + monotonicity proof requires some unusual rules. These state that the
4.312 + connectives $\conj$, $\disj$ and $\exists$ are monotonic with respect to
4.313 + the partial ordering on unary predicates given by $P\sqsubseteq Q$ if and
4.314 + only if $\forall x.P(x)\imp Q(x)$.}
4.315 +
4.316 +The result structure returns the definitions of the recursive sets as a theorem
4.317 +list called {\tt defs}. It also returns, as the theorem {\tt unfold}, a
4.318 +fixedpoint equation such as
4.319 +\begin{eqnarray*}
4.320 + \Fin(A) & = &
4.321 + \begin{array}[t]{r@{\,}l}
4.322 + \{z\in\pow(A). & z=\emptyset \disj{} \\
4.323 + &(\exists a\,b. z=\{a\}\un b\conj a\in A\conj b\in \Fin(A))\}
4.324 + \end{array}
4.325 +\end{eqnarray*}
4.326 +It also returns, as the theorem {\tt dom\_subset}, an inclusion such as
4.327 +$\Fin(A)\sbs\pow(A)$.
4.328 +
4.329 +
4.330 +\subsection{Mutual recursion} \label{mutual-sec}
4.331 +In a mutually recursive definition, the domain for the fixedpoint construction
4.332 +is the disjoint sum of the domain~$D_i$ of each~$R_i$, for $i=1$,
4.333 +\ldots,~$n$. The package uses the injections of the
4.334 +binary disjoint sum, typically $\Inl$ and~$\Inr$, to express injections
4.335 +$h_{1,n}$, \ldots, $h_{n,n}$ for the $n$-ary disjoint sum $D_1+\cdots+D_n$.
4.336 +
4.337 +As discussed elsewhere \cite[\S4.5]{paulson-set-II}, Isabelle/ZF defines the
4.338 +operator $\Part$ to support mutual recursion. The set $\Part(A,h)$
4.339 +contains those elements of~$A$ having the form~$h(z)$:
4.340 +\begin{eqnarray*}
4.341 + \Part(A,h) & \equiv & \{x\in A. \exists z. x=h(z)\}.
4.342 +\end{eqnarray*}
4.343 +For mutually recursive sets $R_1$, \ldots,~$R_n$ with
4.344 +$n>1$, the package makes $n+1$ definitions. The first defines a set $R$ using
4.345 +a fixedpoint operator. The remaining $n$ definitions have the form
4.346 +\begin{eqnarray*}
4.347 + R_i & \equiv & \Part(R,h_{i,n}), \qquad i=1,\ldots, n.
4.348 +\end{eqnarray*}
4.349 +It follows that $R=R_1\un\cdots\un R_n$, where the $R_i$ are pairwise disjoint.
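
The role of $\Part$ can be illustrated by a small Python sketch (hypothetical
names, for illustration only; the actual definitions are the ZF ones above),
where the injections tag their arguments:

```python
# Illustrative model of Part and the disjoint-sum injections (not Isabelle).

def Inl(z):
    return ("Inl", z)

def Inr(z):
    return ("Inr", z)

def Part(A, h):
    """Part(A,h) = {x in A. exists z. x = h(z)} -- recovered here by
    inspecting the tag that the injection h attaches."""
    tag = h(None)[0]
    return {x for x in A if x[0] == tag}

# A combined set R for two mutually recursive sets:
R = {Inl(0), Inl(1), Inr("a")}
R1 = Part(R, Inl)
R2 = Part(R, Inr)

assert R1 | R2 == R         # R = R1 Un R2
assert R1 & R2 == set()     # the R_i are pairwise disjoint
```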
4.350 +
4.351 +
4.352 +\subsection{Proving the introduction rules}
4.353 +The user supplies the package with the desired form of the introduction
4.354 +rules. Once it has derived the theorem {\tt unfold}, it attempts
4.355 +to prove the introduction rules. From the user's point of view, this is the
4.356 +trickiest stage; the proofs often fail. The task is to show that the domain
4.357 +$D_1+\cdots+D_n$ of the combined set $R_1\un\cdots\un R_n$ is
4.358 +closed under all the introduction rules. This essentially involves replacing
4.359 +each~$R_i$ by $D_1+\cdots+D_n$ in each of the introduction rules and
4.360 +attempting to prove the result.
4.361 +
4.362 +Consider the $\Fin(A)$ example. After substituting $\pow(A)$ for $\Fin(A)$
4.363 +in the rules, the package must prove
4.364 +\[ \emptyset\in\pow(A) \qquad
4.365 + \infer{\{a\}\un b\in\pow(A)}{a\in A & b\in\pow(A)}
4.366 +\]
4.367 +Such proofs can be regarded as type-checking the definition. The user
4.368 +supplies the package with type-checking rules to apply. Usually these are
4.369 +general purpose rules from the ZF theory. They could however be rules
4.370 +specifically proved for a particular inductive definition; sometimes this is
4.371 +the easiest way to get the definition through!
4.372 +
4.373 +The package returns the introduction rules as the theorem list {\tt intrs}.
4.374 +
4.375 +\subsection{The elimination rule}
4.376 +The elimination rule, called {\tt elim}, is derived in a straightforward
4.377 +manner. Applying the rule performs a case analysis, with one case for each
4.378 +introduction rule. Continuing our example, the elimination rule for $\Fin(A)$
4.379 +is
4.380 +\[ \infer{Q}{x\in\Fin(A) & \infer*{Q}{[x=\emptyset]}
4.381 + & \infer*{Q}{[x=\{a\}\un b & a\in A &b\in\Fin(A)]_{a,b}} }
4.382 +\]
4.383 +The package also returns a function {\tt mk\_cases}, for generating simplified
4.384 +instances of the elimination rule. However, {\tt mk\_cases} only works for
4.385 +datatypes and for inductive definitions involving datatypes, such as an
4.386 +inductively defined relation between lists. It instantiates {\tt elim}
4.387 +with a user-supplied term, then simplifies the cases using the freeness of
4.388 +the underlying datatype.
4.389 +
4.390 +
4.391 +\section{Induction and co-induction rules}
4.392 +Here we must consider inductive and co-inductive definitions separately.
4.393 +For an inductive definition, the package returns an induction rule derived
4.394 +directly from the properties of least fixedpoints, as well as a modified
4.395 +rule for mutual recursion and inductively defined relations. For a
4.396 +co-inductive definition, the package returns a basic co-induction rule.
4.397 +
4.398 +\subsection{The basic induction rule}\label{basic-ind-sec}
4.399 +The basic rule, called simply {\tt induct}, is appropriate in most situations.
4.400 +For inductive definitions, it is strong rule induction~\cite{camilleri92}; for
4.401 +datatype definitions (see below), it is just structural induction.
4.402 +
4.403 +The induction rule for an inductively defined set~$R$ has the following form.
4.404 +The major premise is $x\in R$. There is a minor premise for each
4.405 +introduction rule:
4.406 +\begin{itemize}
4.407 +\item If the introduction rule concludes $t\in R_i$, then the minor premise
4.408 +is~$P(t)$.
4.409 +
4.410 +\item The minor premise's eigenvariables are precisely the introduction
4.411 +rule's free variables that are not parameters of~$R$ --- for instance, $A$
4.412 +is not an eigenvariable in the $\Fin(A)$ rule below.
4.413 +
4.414 +\item If the introduction rule has a premise $t\in R_i$, then the minor
4.415 +premise discharges the assumption $t\in R_i$ and the induction
4.416 +hypothesis~$P(t)$. If the introduction rule has a premise $t\in M(R_i)$
4.417 +then the minor premise discharges the single assumption
4.418 +\[ t\in M(\{z\in R_i. P(z)\}). \]
4.419 +Because $M$ is monotonic, this assumption implies $t\in M(R_i)$. The
4.420 +occurrence of $P$ gives the effect of an induction hypothesis, which may be
4.421 +exploited by appealing to properties of~$M$.
4.422 +\end{itemize}
4.423 +The rule for $\Fin(A)$ is
4.424 +\[ \infer{P(x)}{x\in\Fin(A) & P(\emptyset)
4.425 + & \infer*{P(\{a\}\un b)}{[a\in A & b\in\Fin(A) & P(b)]_{a,b}} }
4.426 +\]
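As a quick illustration of the rule (this instance is not part of the
package's output), take $P(x)$ to be $x\sbs A$.  The minor premises become
\[ \emptyset\sbs A \qquad
   \infer*{\{a\}\un b\sbs A}{[a\in A & b\in\Fin(A) & b\sbs A]_{a,b}}
\]
Both are trivial in ZF, so every element of $\Fin(A)$ is a subset of~$A$.
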
4.427 +Stronger induction rules often suggest themselves. In the case of
4.428 +$\Fin(A)$, the Isabelle/ZF theory proceeds to derive a rule whose third
4.429 +premise discharges the extra assumption $a\not\in b$. Most special induction
4.430 +rules must be proved manually, but the package proves a rule for mutual
4.431 +induction and inductive relations.
4.432 +
4.433 +\subsection{Mutual induction}
4.434 +The mutual induction rule is called {\tt
4.435 +mutual\_induct}. It differs from the basic rule in several respects:
4.436 +\begin{itemize}
4.437 +\item Instead of a single predicate~$P$, it uses $n$ predicates $P_1$,
4.438 +\ldots,~$P_n$: one for each recursive set.
4.439 +
4.440 +\item There is no major premise such as $x\in R_i$. Instead, the conclusion
4.441 +refers to all the recursive sets:
4.442 +\[ (\forall z.z\in R_1\imp P_1(z))\conj\cdots\conj
4.443 + (\forall z.z\in R_n\imp P_n(z))
4.444 +\]
4.445 +Proving the premises simultaneously establishes $P_i(z)$ for $z\in
4.446 +R_i$ and $i=1$, \ldots,~$n$.
4.447 +
4.448 +\item If the domain of some $R_i$ is the Cartesian product
4.449 +$A_1\times\cdots\times A_m$, then the corresponding predicate $P_i$ takes $m$
4.450 +arguments and the corresponding conjunct of the conclusion is
4.451 +\[ (\forall z_1\ldots z_m.\pair{z_1,\ldots,z_m}\in R_i\imp P_i(z_1,\ldots,z_m))
4.452 +\]
4.453 +\end{itemize}
4.454 +The last point above simplifies reasoning about inductively defined
4.455 +relations. It eliminates the need to express properties of $z_1$,
4.456 +\ldots,~$z_m$ as properties of the tuple $\pair{z_1,\ldots,z_m}$.
4.457 +
4.458 +\subsection{Co-induction}\label{co-ind-sec}
4.459 +A co-inductive definition yields a primitive co-induction rule, with no
4.460 +refinements such as those for the induction rules. (Experience may suggest
4.461 +refinements later.) Consider the co-datatype of lazy lists as an example. For
4.462 +suitable definitions of $\LNil$ and $\LCons$, lazy lists may be defined as the
4.463 +greatest fixedpoint satisfying the rules
4.464 +\[ \LNil\in\llist(A) \qquad
4.465 + \infer[(-)]{\LCons(a,l)\in\llist(A)}{a\in A & l\in\llist(A)}
4.466 +\]
4.467 +The $(-)$ tag stresses that this is a co-inductive definition. A suitable
4.468 +domain for $\llist(A)$ is $\quniv(A)$, a set closed under variant forms of
4.469 +sum and product for representing infinite data structures
4.470 +(\S\ref{data-sec}). Co-inductive definitions use these variant sums and
4.471 +products.
4.472 +
4.473 +The package derives an {\tt unfold} theorem similar to that for $\Fin(A)$.
4.474 +Then it proves the theorem {\tt co\_induct}, which expresses that $\llist(A)$
4.475 +is the greatest solution to this equation contained in $\quniv(A)$:
4.476 +\[ \infer{a\in\llist(A)}{a\in X & X\sbs \quniv(A) &
4.477 + \infer*{z=\LNil\disj \bigl(\exists a\,l.\,
4.478 + \begin{array}[t]{@{}l}
4.479 + z=\LCons(a,l) \conj a\in A \conj{}\\
4.480 + l\in X\un\llist(A) \bigr)
4.481 + \end{array} }{[z\in X]_z}}
4.482 +\]
4.483 +Having $X\un\llist(A)$ instead of simply $X$ in the third premise above
4.484 +represents a slight strengthening of the greatest fixedpoint property. I
4.485 +discuss several forms of co-induction rules elsewhere~\cite{paulson-coind}.
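
A small example shows how the rule admits non-well-founded elements.
Suppose $a\in A$ and some $l\in\quniv(A)$ satisfies the equation
$l=\LCons(a,l)$.  Taking $X=\{l\}$ satisfies the first two premises, and
the third holds because every $z\in X$ equals $\LCons(a,l)$ with $a\in A$
and $l\in X\un\llist(A)$.  The rule concludes $l\in\llist(A)$.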
4.486 +
4.487 +
4.488 +\section{Examples of inductive and co-inductive definitions}\label{ind-eg-sec}
4.489 +This section presents several examples: the finite set operator,
4.490 +lists of $n$ elements, bisimulations on lazy lists, the well-founded part
4.491 +of a relation, and the primitive recursive functions.
4.492 +
4.493 +\subsection{The finite set operator}
4.494 +The definition of finite sets has been discussed extensively above. Here
4.495 +is the corresponding ML invocation (note that $\cons(a,b)$ abbreviates
4.496 +$\{a\}\un b$ in Isabelle/ZF):
4.497 +\begin{ttbox}
4.498 +structure Fin = Inductive_Fun
4.499 + (val thy = Arith.thy addconsts [(["Fin"],"i=>i")];
4.500 + val rec_doms = [("Fin","Pow(A)")];
4.501 + val sintrs =
4.502 + ["0 : Fin(A)",
4.503 + "[| a: A; b: Fin(A) |] ==> cons(a,b) : Fin(A)"];
4.504 + val monos = [];
4.505 + val con_defs = [];
4.506 + val type_intrs = [empty_subsetI, cons_subsetI, PowI]
4.507 + val type_elims = [make_elim PowD]);
4.508 +\end{ttbox}
4.509 +The parent theory is obtained from {\tt Arith.thy} by adding the unary
4.510 +function symbol~$\Fin$. Its domain is specified as $\pow(A)$, where $A$ is
4.511 +the parameter appearing in the introduction rules. For type-checking, the
4.512 +package supplies the introduction rules:
4.513 +\[ \emptyset\sbs A \qquad
4.514 + \infer{\{a\}\un B\sbs C}{a\in C & B\sbs C}
4.515 +\]
4.516 +A further introduction rule and an elimination rule express the two
4.517 +directions of the equivalence $A\in\pow(B)\bimp A\sbs B$. Type-checking
4.518 +involves mostly introduction rules. When the package returns, we can refer
4.519 +to the $\Fin(A)$ introduction rules as {\tt Fin.intrs}, the induction rule
4.520 +as {\tt Fin.induct}, and so forth.
4.521 +
4.522 +\subsection{Lists of $n$ elements}\label{listn-sec}
4.523 +This has become a standard example in the
4.524 +literature. Following Paulin-Mohring~\cite{paulin92}, we could attempt to
4.525 +define a new datatype $\listn(A,n)$, for lists of length~$n$, as an $n$-indexed
4.526 +family of sets. But her introduction rules
4.527 +\[ {\tt Niln}\in\listn(A,0) \qquad
4.528 + \infer{{\tt Consn}(n,a,l)\in\listn(A,\succ(n))}
4.529 + {n\in\nat & a\in A & l\in\listn(A,n)}
4.530 +\]
4.531 +are not acceptable to the inductive definition package:
4.532 +$\listn$ occurs with three different parameter lists in the definition.
4.533 +
4.534 +\begin{figure}
4.535 +\begin{small}
4.536 +\begin{verbatim}
4.537 +structure ListN = Inductive_Fun
4.538 + (val thy = ListFn.thy addconsts [(["listn"],"i=>i")];
4.539 + val rec_doms = [("listn", "nat*list(A)")];
4.540 + val sintrs =
4.541 + ["<0,Nil> : listn(A)",
4.542 + "[| a: A; <n,l> : listn(A) |] ==> <succ(n), Cons(a,l)> : listn(A)"];
4.543 + val monos = [];
4.544 + val con_defs = [];
4.545 + val type_intrs = nat_typechecks@List.intrs@[SigmaI]
4.546 + val type_elims = [SigmaE2]);
4.547 +\end{verbatim}
4.548 +\end{small}
4.549 +\hrule
4.550 +\caption{Defining lists of $n$ elements} \label{listn-fig}
4.551 +\end{figure}
4.552 +
4.553 +There is an obvious way of handling this particular example, which may suggest
4.554 +a general approach to varying parameters. Here, we can take advantage of an
4.555 +existing datatype definition of $\lst(A)$, with constructors $\Nil$
4.556 +and~$\Cons$. We can then incorporate the number~$n$ into the inductive set itself,
4.557 +defining $\listn(A)$ as a relation. It consists of pairs $\pair{n,l}$ such
4.558 +that $n\in\nat$ and~$l\in\lst(A)$ and $l$ has length~$n$. In fact,
4.559 +$\listn(A)$ turns out to be the converse of the length function on~$\lst(A)$.
4.560 +The Isabelle/ZF introduction rules are
4.561 +\[ \pair{0,\Nil}\in\listn(A) \qquad
4.562 + \infer{\pair{\succ(n),\Cons(a,l)}\in\listn(A)}
4.563 + {a\in A & \pair{n,l}\in\listn(A)}
4.564 +\]
4.565 +Figure~\ref{listn-fig} presents the ML invocation. A theory of lists,
4.566 +extended with a declaration of $\listn$, is the parent theory. The domain
4.567 +is specified as $\nat\times\lst(A)$. The type-checking rules include those
4.568 +for 0, $\succ$, $\Nil$ and $\Cons$. Because $\listn(A)$ is a set of pairs,
4.569 +type-checking also requires introduction and elimination rules to express
4.570 +both directions of the equivalence $\pair{a,b}\in A\times B \bimp a\in A
4.571 +\conj b\in B$.
4.572 +
4.573 +The package returns introduction, elimination and induction rules for
4.574 +$\listn$. The basic induction rule, {\tt ListN.induct}, is
4.575 +\[ \infer{P(x)}{x\in\listn(A) & P(\pair{0,\Nil}) &
4.576 + \infer*{P(\pair{\succ(n),\Cons(a,l)})}
4.577 + {[a\in A & \pair{n,l}\in\listn(A) & P(\pair{n,l})]_{a,l,n}}}
4.578 +\]
4.579 +This rule requires the induction formula to be a
4.580 +unary property of pairs,~$P(\pair{n,l})$. The alternative rule, {\tt
4.581 +ListN.mutual\_induct}, uses a binary property instead:
4.582 +\[ \infer{\forall n\,l. \pair{n,l}\in\listn(A) \imp P(n,l)}
4.583 + {P(0,\Nil) &
4.584 + \infer*{P(\succ(n),\Cons(a,l))}
4.585 + {[a\in A & \pair{n,l}\in\listn(A) & P(n,l)]_{a,l,n}}}
4.586 +\]
4.587 +It is now a simple matter to prove theorems about $\listn(A)$, such as
4.588 +\[ \forall l\in\lst(A). \pair{\length(l),\, l}\in\listn(A) \]
4.589 +\[ \listn(A)``\{n\} = \{l\in\lst(A). \length(l)=n\} \]
4.590 +This latter result --- here $r``A$ denotes the image of $A$ under $r$
4.591 +--- asserts that the inductive definition agrees with the obvious notion of
4.592 +$n$-element list.
4.593 +
4.594 +Unlike in Coq, the definition does not declare a new datatype. A `list of
4.595 +$n$ elements' really is a list, and is subject to list operators such
4.596 +as append. For example, a trivial induction yields
4.597 +\[ \infer{\pair{m\mathbin{+} m',\, l@l'}\in\listn(A)}
4.598 + {\pair{m,l}\in\listn(A) & \pair{m',l'}\in\listn(A)}
4.599 +\]
4.600 +where $+$ here denotes addition on the natural numbers and @ denotes append.
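
The induction is worth sketching.  Fix $\pair{m',l'}\in\listn(A)$ and
induct on $\pair{m,l}\in\listn(A)$.  The base case asks for
$\pair{0+m',\,\Nil@l'}\in\listn(A)$, which is just the fixed premise
because $0+m'=m'$ and $\Nil@l'=l'$.  In the step case, the induction
hypothesis $\pair{n+m',\,l@l'}\in\listn(A)$ and $a\in A$ yield
$\pair{\succ(n+m'),\,\Cons(a,l@l')}\in\listn(A)$ by the second
introduction rule; the equations $\succ(n)+m'=\succ(n+m')$ and
$\Cons(a,l)@l'=\Cons(a,l@l')$ then close the case.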
4.601 +
4.602 +\ifCADE\typeout{****Omitting mk_cases from CADE version!}
4.603 +\else
4.604 +\subsection{A demonstration of {\tt mk\_cases}}
4.605 +The elimination rule, {\tt ListN.elim}, is cumbersome:
4.606 +\[ \infer{Q}{x\in\listn(A) &
4.607 + \infer*{Q}{[x = \pair{0,\Nil}]} &
4.608 + \infer*{Q}
4.609 + {\left[\begin{array}{l}
4.610 + x = \pair{\succ(n),\Cons(a,l)} \\
4.611 + a\in A \\
4.612 + \pair{n,l}\in\listn(A)
4.613 + \end{array} \right]_{a,l,n}}}
4.614 +\]
4.615 +The function {\tt ListN.mk\_cases} generates simplified instances of this
4.616 +rule. It works by freeness reasoning on the list constructors.
4.617 +If $x$ is $\pair{i,\Nil}$ or $\pair{i,\Cons(a,l)}$ then {\tt ListN.mk\_cases}
4.618 +deduces the corresponding form of~$i$. For example,
4.619 +\begin{ttbox}
4.620 +ListN.mk_cases List.con_defs "<i,Cons(a,l)> : listn(A)"
4.621 +\end{ttbox}
4.622 +yields the rule
4.623 +\[ \infer{Q}{\pair{i, \Cons(a,l)}\in\listn(A) &
4.624 + \infer*{Q}
4.625 + {\left[\begin{array}{l}
4.626 + i = \succ(n) \\ a\in A \\ \pair{n,l}\in\listn(A)
4.627 + \end{array} \right]_{n}}}
4.628 +\]
4.629 +The package also has built-in rules for freeness reasoning about $0$
4.630 +and~$\succ$. So if $x$ is $\pair{0,l}$ or $\pair{\succ(i),l}$, then {\tt
4.631 +ListN.mk\_cases} can similarly deduce the corresponding form of~$l$.
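
Spelled out for $\pair{0,l}$, the $\Cons$ case of the elimination rule
would require $0=\succ(n)$ and so vanishes, while the $\Nil$ case forces
$l=\Nil$; the simplified rule takes the form
\[ \infer{Q}{\pair{0,l}\in\listn(A) & \infer*{Q}{[l=\Nil]}} \]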
4.632 +
4.633 +The function {\tt mk\_cases} is also useful with datatype definitions
4.634 +themselves. The version from the definition of lists, namely {\tt
4.635 +List.mk\_cases}, can prove the rule
4.636 +\[ \infer{Q}{\Cons(a,l)\in\lst(A)
4.637 + & \infer*{Q}{[a\in A & l\in\lst(A)]} }
4.638 +\]
4.639 +The most important uses of {\tt mk\_cases} concern inductive definitions of
4.640 +evaluation relations. Then {\tt mk\_cases} supports the kind of backward
4.641 +inference typical of hand proofs, for example to prove that the evaluation
4.642 +relation is functional.
4.643 +\fi %CADE
4.644 +
4.645 +\subsection{A co-inductive definition: bisimulations on lazy lists}
4.646 +This example anticipates the definition of the co-datatype $\llist(A)$, which
4.647 +consists of lazy lists over~$A$. Its constructors are $\LNil$ and $\LCons$,
4.648 +satisfying the introduction rules shown in~\S\ref{co-ind-sec}.
4.649 +Because $\llist(A)$ is defined as a greatest fixedpoint and uses the variant
4.650 +pairing and injection operators, it contains non-well-founded elements such as
4.651 +solutions to $\LCons(a,l)=l$.
4.652 +
4.653 +The next step in the development of lazy lists is to define a co-induction
4.654 +principle for proving equalities. This is done by showing that the equality
4.655 +relation on lazy lists is the greatest fixedpoint of some monotonic
4.656 +operation. The usual approach~\cite{pitts94} is to define some notion of
4.657 +bisimulation for lazy lists, define equivalence to be the greatest
4.658 +bisimulation, and finally to prove that two lazy lists are equivalent if and
4.659 +only if they are equal. The co-induction rule for equivalence then yields a
4.660 +co-induction principle for equalities.
4.661 +
4.662 +A binary relation $R$ on lazy lists is a {\bf bisimulation} provided $R\sbs
4.663 +R^+$, where $R^+$ is the relation
4.664 +\[ \{\pair{\LNil;\LNil}\} \un
4.665 + \{\pair{\LCons(a,l);\LCons(a,l')} . a\in A \conj \pair{l;l'}\in R\}.
4.666 +\]
4.667 +Variant pairs are used, $\pair{l;l'}$ instead of $\pair{l,l'}$, because this
4.668 +is a co-inductive definition.
4.669 +
4.670 +Two lazy lists are {\bf equivalent} if they belong to some bisimulation.
4.671 +Equivalence can be co-inductively defined as the greatest fixedpoint for the
4.672 +introduction rules
4.673 +\[ \pair{\LNil;\LNil} \in\lleq(A) \qquad
4.674 + \infer[(-)]{\pair{\LCons(a,l);\LCons(a,l')} \in\lleq(A)}
4.675 + {a\in A & \pair{l;l'}\in \lleq(A)}
4.676 +\]
4.677 +To make this co-inductive definition, we invoke \verb|Co_Inductive_Fun|:
4.678 +\begin{ttbox}
4.679 +structure LList_Eq = Co_Inductive_Fun
4.680 +(val thy = LList.thy addconsts [(["lleq"],"i=>i")];
4.681 + val rec_doms = [("lleq", "llist(A) <*> llist(A)")];
4.682 + val sintrs =
4.683 + ["<LNil; LNil> : lleq(A)",
4.684 + "[| a:A; <l;l'>: lleq(A) |] ==> <LCons(a,l); LCons(a,l')>: lleq(A)"];
4.685 + val monos = [];
4.686 + val con_defs = [];
4.687 + val type_intrs = LList.intrs@[QSigmaI];
4.688 + val type_elims = [QSigmaE2]);
4.689 +\end{ttbox}
4.690 +Again, {\tt addconsts} declares a constant for $\lleq$ in the parent theory.
4.691 +The domain of $\lleq(A)$ is $\llist(A)\otimes\llist(A)$, where $\otimes$
4.692 +denotes the variant Cartesian product. The type-checking rules include the
4.693 +introduction rules for lazy lists as well as rules expressing both
4.694 +directions of the equivalence
4.695 +\[ \pair{a;b}\in A\otimes B \bimp a\in A \conj b\in B. \]
4.696 +
4.697 +The package returns the introduction rules and the elimination rule, as
4.698 +usual. But instead of induction rules, it returns a co-induction rule.
4.699 +The rule is too big to display in the usual notation; its conclusion is
4.700 +$a\in\lleq(A)$ and its premises are $a\in X$, $X\sbs \llist(A)\otimes\llist(A)$
4.701 +and
4.702 +\[ \infer*{z=\pair{\LNil;\LNil}\disj \bigl(\exists a\,l\,l'.\,
4.703 + \begin{array}[t]{@{}l}
4.704 + z=\pair{\LCons(a,l);\LCons(a,l')} \conj a\in A \conj{}\\
4.705 + \pair{l;l'}\in X\un\lleq(A) \bigr)
4.706 + \end{array} }{[z\in X]_z}
4.707 +\]
4.708 +Thus if $a\in X$, where $X$ is a bisimulation contained in the
4.709 +domain of $\lleq(A)$, then $a\in\lleq(A)$. It is easy to show that
4.710 +$\lleq(A)$ is reflexive: the equality relation is a bisimulation. And
4.711 +$\lleq(A)$ is symmetric: its converse is a bisimulation. But showing that
4.712 +$\lleq(A)$ coincides with the equality relation takes considerable work.
4.713 +
4.714 +\subsection{The accessible part of a relation}\label{acc-sec}
4.715 +Let $\prec$ be a binary relation on~$D$; in short, $(\prec)\sbs D\times D$.
4.716 +The {\bf accessible} or {\bf well-founded} part of~$\prec$, written
4.717 +$\acc(\prec)$, is essentially that subset of~$D$ for which $\prec$ admits
4.718 +no infinite decreasing chains~\cite{aczel77}. Formally, $\acc(\prec)$ is
4.719 +inductively defined to be the least set that contains $a$ if it contains
4.720 +all $\prec$-predecessors of~$a$, for $a\in D$. Thus we need an
4.721 +introduction rule of the form
4.722 +%%%%\[ \infer{a\in\acc(\prec)}{\infer*{y\in\acc(\prec)}{[y\prec a]_y}} \]
4.723 +\[ \infer{a\in\acc(\prec)}{\forall y.y\prec a\imp y\in\acc(\prec)} \]
4.724 +Paulin-Mohring treats this example in Coq~\cite{paulin92}, but it causes
4.725 +difficulties for other systems. Its premise does not conform to
4.726 +the structure of introduction rules for HOL's inductive definition
4.727 +package~\cite{camilleri92}. It is also unacceptable to the Isabelle package
4.728 +(\S\ref{intro-sec}), but fortunately can be transformed into one of the
4.729 +form $t\in M(R)$.
4.730 +
4.731 +The powerset operator is monotonic, and $t\in\pow(R)$ is equivalent to
4.732 +$t\sbs R$. This in turn is equivalent to $\forall y\in t. y\in R$. To
4.733 +express $\forall y.y\prec a\imp y\in\acc(\prec)$ we need only find a
4.734 +term~$t$ such that $y\in t$ if and only if $y\prec a$. A suitable $t$ is
4.735 +the inverse image of~$\{a\}$ under~$\prec$.
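
Spelling out the equivalence: writing $r$ for~$\prec$, we have $y\in
r^{-}``\{a\}$ if and only if $\pair{y,a}\in r$.  So the troublesome
premise $\forall y.y\prec a\imp y\in\acc(\prec)$ becomes
\[ r^{-}``\{a\} \sbs \acc(r), \]
that is, $r^{-}``\{a\}\in\pow(\acc(r))$, which has the required form
$t\in M(R)$ with $M=\pow$.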
4.736 +
4.737 +The ML invocation below follows this approach. Here $r$ is~$\prec$ and
4.738 +$\field(r)$ refers to~$D$, the domain of $\acc(r)$. Finally $r^{-}``\{a\}$
4.739 +denotes the inverse image of~$\{a\}$ under~$r$. The package is supplied
4.740 +the theorem {\tt Pow\_mono}, which asserts that $\pow$ is monotonic.
4.741 +\begin{ttbox}
4.742 +structure Acc = Inductive_Fun
4.743 + (val thy = WF.thy addconsts [(["acc"],"i=>i")];
4.744 + val rec_doms = [("acc", "field(r)")];
4.745 + val sintrs =
4.746 + ["[| r-``\{a\} : Pow(acc(r)); a : field(r) |] ==> a : acc(r)"];
4.747 + val monos = [Pow_mono];
4.748 + val con_defs = [];
4.749 + val type_intrs = [];
4.750 + val type_elims = []);
4.751 +\end{ttbox}
4.752 +The Isabelle theory proceeds to prove facts about $\acc(\prec)$. For
4.753 +instance, $\prec$ is well-founded if and only if its field is contained in
4.754 +$\acc(\prec)$.
4.755 +
4.756 +As mentioned in~\S\ref{basic-ind-sec}, a premise of the form $t\in M(R)$
4.757 +gives rise to an unusual induction hypothesis. Let us examine the
4.758 +induction rule, {\tt Acc.induct}:
4.759 +\[ \infer{P(x)}{x\in\acc(r) &
4.760 + \infer*{P(a)}{[r^{-}``\{a\}\in\pow(\{z\in\acc(r).P(z)\}) &
4.761 + a\in\field(r)]_a}}
4.762 +\]
4.763 +The strange induction hypothesis is equivalent to
4.764 +$\forall y. \pair{y,a}\in r\imp y\in\acc(r)\conj P(y)$.
4.765 +Therefore the rule expresses well-founded induction on the accessible part
4.766 +of~$\prec$.
4.767 +
4.768 +The use of inverse image is not essential. The Isabelle package can accept
4.769 +introduction rules with arbitrary premises of the form $\forall
4.770 +\vec{y}.P(\vec{y})\imp f(\vec{y})\in R$. The premise can be expressed
4.771 +equivalently as
4.772 +\[ \{z\in D. \exists\vec{y}. z=f(\vec{y}) \conj P(\vec{y})\} \in \pow(R) \]
4.773 +provided $f(\vec{y})\in D$ for all $\vec{y}$ such that~$P(\vec{y})$. The
4.774 +following section demonstrates another use of the premise $t\in M(R)$,
4.775 +where $M=\lst$.
4.776 +
4.777 +\subsection{The primitive recursive functions}\label{primrec-sec}
4.778 +The primitive recursive functions are traditionally defined inductively, as
4.779 +a subset of the functions over the natural numbers. One difficulty is that
4.780 +functions of all arities are taken together, but this is easily
4.781 +circumvented by regarding them as functions on lists. Another difficulty,
4.782 +the notion of composition, is less easily circumvented.
4.783 +
4.784 +Here is a more precise definition. Letting $\vec{x}$ abbreviate
4.785 +$x_0,\ldots,x_{n-1}$, we can write lists such as $[\vec{x}]$,
4.786 +$[y+1,\vec{x}]$, etc. A function is {\bf primitive recursive} if it
4.787 +belongs to the least set of functions in $\lst(\nat)\to\nat$ containing
4.788 +\begin{itemize}
4.789 +\item The {\bf successor} function $\SC$, such that $\SC[y,\vec{x}]=y+1$.
4.790 +\item All {\bf constant} functions $\CONST(k)$, such that
4.791 + $\CONST(k)[\vec{x}]=k$.
4.792 +\item All {\bf projection} functions $\PROJ(i)$, such that
4.793 + $\PROJ(i)[\vec{x}]=x_i$ if $0\leq i<n$.
4.794 +\item All {\bf compositions} $\COMP(g,[f_0,\ldots,f_{m-1}])$,
4.795 +where $g$ and $f_0$, \ldots, $f_{m-1}$ are primitive recursive,
4.796 +such that
4.797 +\begin{eqnarray*}
4.798 + \COMP(g,[f_0,\ldots,f_{m-1}])[\vec{x}] & = &
4.799 + g[f_0[\vec{x}],\ldots,f_{m-1}[\vec{x}]].
4.800 +\end{eqnarray*}
4.801 +
4.802 +\item All {\bf recursions} $\PREC(f,g)$, where $f$ and $g$ are primitive
4.803 + recursive, such that
4.804 +\begin{eqnarray*}
4.805 + \PREC(f,g)[0,\vec{x}] & = & f[\vec{x}] \\
4.806 + \PREC(f,g)[y+1,\vec{x}] & = & g[\PREC(f,g)[y,\vec{x}],\, y,\, \vec{x}].
4.807 +\end{eqnarray*}
4.808 +\end{itemize}
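A familiar function illustrates these operations; the encoding below is
one possibility, not taken from any formal development.  Addition on the
natural numbers, regarded as a function on lists, can be written as
$\PREC(\PROJ(0),\COMP(\SC,[\PROJ(0)]))$; call it ${\it add}$.  The
equations above give
\begin{eqnarray*}
  {\it add}[0,x] & = & \PROJ(0)[x] = x \\
  {\it add}[y+1,x] & = & \COMP(\SC,[\PROJ(0)])[{\it add}[y,x],\,y,\,x] \\
   & = & \SC[{\it add}[y,x]] \;=\; {\it add}[y,x]+1
\end{eqnarray*}
Thus ${\it add}[y,x]=y+x$, as expected.
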
4.809 +Composition is awkward because it combines not two functions, as is usual,
4.810 +but $m+1$ functions. In her proof that Ackermann's function is not
4.811 +primitive recursive, Nora Szasz was unable to formalize this definition
4.812 +directly~\cite{szasz93}. So she generalized primitive recursion to
4.813 +tuple-valued functions. This modified the inductive definition such that
4.814 +each operation on primitive recursive functions combined just two functions.
4.815 +
4.816 +\begin{figure}
4.817 +\begin{ttbox}
4.818 +structure Primrec = Inductive_Fun
4.819 + (val thy = Primrec0.thy;
4.820 + val rec_doms = [("primrec", "list(nat)->nat")];
4.821 + val ext = None
4.822 + val sintrs =
4.823 + ["SC : primrec",
4.824 + "k: nat ==> CONST(k) : primrec",
4.825 + "i: nat ==> PROJ(i) : primrec",
4.826 + "[| g: primrec; fs: list(primrec) |] ==> COMP(g,fs): primrec",
4.827 + "[| f: primrec; g: primrec |] ==> PREC(f,g): primrec"];
4.828 + val monos = [list_mono];
4.829 + val con_defs = [SC_def,CONST_def,PROJ_def,COMP_def,PREC_def];
4.830 + val type_intrs = pr0_typechecks
4.831 + val type_elims = []);
4.832 +\end{ttbox}
4.833 +\hrule
4.834 +\caption{Inductive definition of the primitive recursive functions}
4.835 +\label{primrec-fig}
4.836 +\end{figure}
4.837 +\def\fs{{\it fs}}
4.838 +Szasz was using ALF, but Coq and HOL would also have problems accepting
4.839 +this definition. Isabelle's package accepts it easily since
4.840 +$[f_0,\ldots,f_{m-1}]$ is a list of primitive recursive functions and
4.841 +$\lst$ is monotonic. There are five introduction rules, one for each of
4.842 +the five forms of primitive recursive function. Note the one for $\COMP$:
4.843 +\[ \infer{\COMP(g,\fs)\in\primrec}{g\in\primrec & \fs\in\lst(\primrec)} \]
4.844 +The induction rule for $\primrec$ has one case for each introduction rule.
4.845 +Due to the use of $\lst$ as a monotone operator, the composition case has
4.846 +an unusual induction hypothesis:
4.847 + \[ \infer*{P(\COMP(g,\fs))}
4.848 + {[g\in\primrec & \fs\in\lst(\{z\in\primrec.P(z)\})]_{\fs,g}} \]
4.849 +The hypothesis states that $\fs$ is a list of primitive recursive functions
4.850 +satisfying the induction formula. Proving the $\COMP$ case typically requires
4.851 +structural induction on lists, yielding two subcases: either $\fs=\Nil$ or
4.852 +else $\fs=\Cons(f,\fs')$, where $f\in\primrec$, $P(f)$, and $\fs'$ is
4.853 +another list of primitive recursive functions satisfying~$P$.
4.854 +
4.855 +Figure~\ref{primrec-fig} presents the ML invocation. Theory {\tt
4.856 + Primrec0.thy} defines the constants $\SC$, etc.; their definitions
4.857 +consist of routine list programming and are omitted. The Isabelle theory
4.858 +goes on to formalize Ackermann's function and prove that it is not
4.859 +primitive recursive, using the induction rule {\tt Primrec.induct}. The
4.860 +proof follows Szasz's excellent account.
4.861 +
4.862 +ALF and Coq treat inductive definitions as datatypes, with a new
4.863 +constructor for each introduction rule. This forced Szasz to define a
4.864 +small programming language for the primitive recursive functions, and then
4.865 +define their semantics. But the Isabelle/ZF formulation defines the
4.866 +primitive recursive functions directly as a subset of the function set
4.867 +$\lst(\nat)\to\nat$. This saves a step and conforms better to mathematical
4.868 +tradition.
4.869 +
4.870 +
4.871 +\section{Datatypes and co-datatypes}\label{data-sec}
4.872 +A (co-)datatype definition is a (co-)inductive definition with automatically
4.873 +defined constructors and case analysis operator. The package proves that the
4.874 +case operator inverts the constructors, and can also prove freeness theorems
4.875 +involving any pair of constructors.
4.876 +
4.877 +
4.878 +\subsection{Constructors and their domain}
4.879 +Conceptually, our two forms of definition are distinct: a (co-)inductive
4.880 +definition selects a subset of an existing set, but a (co-)datatype
4.881 +definition creates a new set. But the package reduces the latter to the
4.882 +former. A set having strong closure properties must serve as the domain
4.883 +of the (co-)inductive definition. Constructing this set requires some
4.884 +theoretical effort. Consider first datatypes and then co-datatypes.
4.885 +
4.886 +Isabelle/ZF defines the standard notion of Cartesian product $A\times B$,
4.887 +containing ordered pairs $\pair{a,b}$. Now the $m$-tuple
4.888 +$\pair{x_1,\ldots,x_m}$ is the empty set~$\emptyset$ if $m=0$, simply
4.889 +$x_1$ if $m=1$, and $\pair{x_1,\pair{x_2,\ldots,x_m}}$ if $m\geq2$.
4.890 +Isabelle/ZF also defines the disjoint sum $A+B$, containing injections
4.891 +$\Inl(a)\equiv\pair{0,a}$ and $\Inr(b)\equiv\pair{1,b}$.
4.892 +
4.893 +A datatype constructor $\Con(x_1,\ldots,x_m)$ is defined to be
4.894 +$h(\pair{x_1,\ldots,x_m})$, where $h$ is composed of $\Inl$ and~$\Inr$.
4.895 +In a mutually recursive definition, all constructors for the set~$R_i$ have
4.896 +the outer form~$h_{i,n}$, where $h_{i,n}$ is the injection described
4.897 +in~\S\ref{mutual-sec}. Further nested injections ensure that the
4.898 +constructors for~$R_i$ are pairwise distinct.
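
For the datatype of lists, for instance, a natural encoding (a plausible
choice, not fixed by the discussion above) is
\[ \Nil \equiv \Inl(\emptyset) \qquad \Cons(a,l) \equiv \Inr(\pair{a,l}) \]
which makes the two constructors distinct, since $\Inl$ and $\Inr$ are
distinct, and injective, since pairing is injective.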
4.899 +
4.900 +Isabelle/ZF defines the set $\univ(A)$, which contains~$A$ and
4.901 +furthermore contains $\pair{a,b}$, $\Inl(a)$ and $\Inr(b)$ for $a$,
4.902 +$b\in\univ(A)$. In a typical datatype definition with set parameters
4.903 +$A_1$, \ldots, $A_k$, a suitable domain for all the recursive sets is
4.904 +$\univ(A_1\un\cdots\un A_k)$. This solves the problem for
4.905 +datatypes~\cite[\S4.2]{paulson-set-II}.
4.906 +
4.907 +The standard pairs and injections can only yield well-founded
4.908 +constructions. This eases the (manual!) definition of recursive functions
4.909 +over datatypes. But they are unsuitable for co-datatypes, which typically
4.910 +contain non-well-founded objects.
4.911 +
4.912 +To support co-datatypes, Isabelle/ZF defines a variant notion of ordered
4.913 +pair, written~$\pair{a;b}$. It also defines the corresponding variant
4.914 +notion of Cartesian product $A\otimes B$, variant injections $\QInl(a)$
4.915 +and~$\QInr(b)$, and variant disjoint sum $A\oplus B$. Finally it defines
4.916 +the set $\quniv(A)$, which contains~$A$ and furthermore contains
4.917 +$\pair{a;b}$, $\QInl(a)$ and $\QInr(b)$ for $a$, $b\in\quniv(A)$. In a
4.918 +typical co-datatype definition with set parameters $A_1$, \ldots, $A_k$, a
4.919 +suitable domain is $\quniv(A_1\un\cdots\un A_k)$. This approach is an
4.920 +alternative to adopting an Anti-Foundation
4.921 +Axiom~\cite{aczel88}.\footnote{No reference is available. Variant pairs
4.922 + are defined by $\pair{a;b}\equiv a+b \equiv (\{0\}\times a) \un (\{1\}\times
4.923 + b)$, where $\times$ is the Cartesian product for standard ordered pairs. Now
4.924 + $\pair{a;b}$ is injective and monotonic in its two arguments.
4.925 + Non-well-founded constructions, such as infinite lists, are constructed
4.926 + as least fixedpoints; the bounding set typically has the form
4.927 + $\univ(a_1\un\cdots\un a_k)$, where $a_1$, \ldots, $a_k$ are specified
4.928 + elements of the construction.}
4.929 +
4.930 +
4.931 +\subsection{The case analysis operator}
4.932 +The (co-)datatype package automatically defines a case analysis operator,
4.933 +called {\tt$R$\_case}. A mutually recursive definition still has only
4.934 +one operator, called {\tt$R_1$\_\ldots\_$R_n$\_case}. The case operator is
4.935 +analogous to those for products and sums.
4.936 +
4.937 +Datatype definitions employ standard products and sums, whose operators are
4.938 +$\split$ and $\case$ and satisfy the equations
4.939 +\begin{eqnarray*}
4.940 + \split(f,\pair{x,y}) & = & f(x,y) \\
4.941 + \case(f,g,\Inl(x)) & = & f(x) \\
4.942 + \case(f,g,\Inr(y)) & = & g(y)
4.943 +\end{eqnarray*}
4.944 +Suppose the datatype has $k$ constructors $\Con_1$, \ldots,~$\Con_k$. Then
4.945 +its case operator takes $k+1$ arguments and satisfies an equation for each
4.946 +constructor:
4.947 +\begin{eqnarray*}
4.948 + R\hbox{\_case}(f_1,\ldots,f_k, {\tt Con}_i(\vec{x})) & = & f_i(\vec{x}),
4.949 + \qquad i = 1, \ldots, k
4.950 +\end{eqnarray*}
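For lists, with constructors $\Nil$ and $\Cons$, this scheme yields a case
operator satisfying
\begin{eqnarray*}
  \lst\hbox{\_case}(f_1,f_2,\Nil) & = & f_1 \\
  \lst\hbox{\_case}(f_1,f_2,\Cons(a,l)) & = & f_2(a,l)
\end{eqnarray*}
Assuming $\Nil$ and $\Cons(a,l)$ are encoded as $\Inl(\emptyset)$
and~$\Inr(\pair{a,l})$, this operator can be expressed as
$\case(\lambda u.f_1,\split(f_2))$.
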
4.951 +Note that if $f$ and $g$ have meta-type $i\To i$ then so do $\split(f)$ and
4.952 +$\case(f,g)$. This works because $\split$ and $\case$ operate on their
4.953 +last argument. They are easily combined to make complex case analysis
4.954 +operators. Here are two examples:
4.955 +\begin{itemize}
4.956 +\item $\split(\lambda x.\split(f(x)))$ performs case analysis for
4.957 +$A\times (B\times C)$, as is easily verified:
4.958 +\begin{eqnarray*}
4.959 + \split(\lambda x.\split(f(x)), \pair{a,b,c})
4.960 + & = & (\lambda x.\split(f(x)))(a,\pair{b,c}) \\
4.961 + & = & \split(f(a), \pair{b,c}) \\
4.962 + & = & f(a,b,c)
4.963 +\end{eqnarray*}
4.964 +
4.965 +\item $\case(f,\case(g,h))$ performs case analysis for $A+(B+C)$; let us
4.966 +verify one of the three equations:
4.967 +\begin{eqnarray*}
4.968 + \case(f,\case(g,h), \Inr(\Inl(b)))
4.969 + & = & \case(g,h,\Inl(b)) \\
4.970 + & = & g(b)
4.971 +\end{eqnarray*}
4.972 +\end{itemize}
4.973 +Co-datatype definitions are treated in precisely the same way. They express
4.974 +case operators using those for the variant products and sums, namely
4.975 +$\qsplit$ and~$\qcase$.
4.976 +
4.977 +
4.978 +\ifCADE The package has processed all the datatypes discussed in my earlier
4.979 +paper~\cite{paulson-set-II} and the co-datatype of lazy lists. Space
4.980 +limitations preclude discussing these examples here, but they are
4.981 +distributed with Isabelle.
4.982 +\typeout{****Omitting datatype examples from CADE version!} \else
4.983 +
4.984 +To see how constructors and the case analysis operator are defined, let us
4.985 +examine some examples. These include lists and trees/forests, which I have
4.986 +discussed extensively in another paper~\cite{paulson-set-II}.
4.987 +
4.988 +\begin{figure}
4.989 +\begin{ttbox}
4.990 +structure List = Datatype_Fun
4.991 + (val thy = Univ.thy;
4.992 + val rec_specs =
4.993 + [("list", "univ(A)",
4.994 + [(["Nil"], "i"),
4.995 + (["Cons"], "[i,i]=>i")])];
4.996 + val rec_styp = "i=>i";
4.997 + val ext = None
4.998 + val sintrs =
4.999 + ["Nil : list(A)",
4.1000 + "[| a: A; l: list(A) |] ==> Cons(a,l) : list(A)"];
4.1001 + val monos = [];
4.1002 + val type_intrs = datatype_intrs
4.1003 + val type_elims = datatype_elims);
4.1004 +\end{ttbox}
4.1005 +\hrule
4.1006 +\caption{Defining the datatype of lists} \label{list-fig}
4.1007 +
4.1008 +\medskip
4.1009 +\begin{ttbox}
4.1010 +structure LList = Co_Datatype_Fun
4.1011 + (val thy = QUniv.thy;
4.1012 + val rec_specs =
4.1013 + [("llist", "quniv(A)",
4.1014 + [(["LNil"], "i"),
4.1015 + (["LCons"], "[i,i]=>i")])];
4.1016 + val rec_styp = "i=>i";
4.1017 + val ext = None
4.1018 + val sintrs =
4.1019 + ["LNil : llist(A)",
4.1020 + "[| a: A; l: llist(A) |] ==> LCons(a,l) : llist(A)"];
4.1021 + val monos = [];
4.1022 + val type_intrs = co_datatype_intrs
4.1023 + val type_elims = co_datatype_elims);
4.1024 +\end{ttbox}
4.1025 +\hrule
4.1026 +\caption{Defining the co-datatype of lazy lists} \label{llist-fig}
4.1027 +\end{figure}
4.1028 +
4.1029 +\subsection{Example: lists and lazy lists}
4.1030 +Figures \ref{list-fig} and~\ref{llist-fig} present the ML definitions of
4.1031 +lists and lazy lists, respectively. They highlight the (many) similarities
4.1032 +and (few) differences between datatype and co-datatype definitions.
4.1033 +
4.1034 +Each form of list has two constructors, one for the empty list and one for
4.1035 +adding an element to a list. Each takes a parameter, defining the set of
4.1036 +lists over a given set~$A$.  Each uses the appropriate domain from an
4.1037 +Isabelle/ZF theory:
4.1038 +\begin{itemize}
4.1039 +\item $\lst(A)$ specifies domain $\univ(A)$ and parent theory {\tt Univ.thy}.
4.1040 +
4.1041 +\item $\llist(A)$ specifies domain $\quniv(A)$ and parent theory {\tt
4.1042 +QUniv.thy}.
4.1043 +\end{itemize}
4.1044 +
4.1045 +Since $\lst(A)$ is a datatype, it enjoys a structural induction rule, {\tt List.induct}:
4.1046 +\[ \infer{P(x)}{x\in\lst(A) & P(\Nil)
4.1047 + & \infer*{P(\Cons(a,l))}{[a\in A & l\in\lst(A) & P(l)]_{a,l}} }
4.1048 +\]
4.1049 +Induction and freeness yield the law $l\not=\Cons(a,l)$. To strengthen this,
4.1050 +Isabelle/ZF defines the rank of a set and proves that the standard pairs and
4.1051 +injections have greater rank than their components. An immediate consequence,
4.1052 +which justifies structural recursion on lists \cite[\S4.3]{paulson-set-II},
4.1053 +is
4.1054 +\[ \rank(l) < \rank(\Cons(a,l)). \]
4.1055 +
4.1056 +Since $\llist(A)$ is a co-datatype, it has no induction rule. Instead it has
4.1057 +the co-induction rule shown in \S\ref{co-ind-sec}. Since variant pairs and
4.1058 +injections are monotonic and need not have greater rank than their
4.1059 +components, fixedpoint operators can create cyclic constructions. For
4.1060 +example, the definition
4.1061 +\begin{eqnarray*}
4.1062 + \lconst(a) & \equiv & \lfp(\univ(a), \lambda l. \LCons(a,l))
4.1063 +\end{eqnarray*}
4.1064 +yields $\lconst(a) = \LCons(a,\lconst(a))$.
4.1065 +
4.1066 +\medskip
4.1067 +It may be instructive to examine the definitions of the constructors and
4.1068 +case operator for $\lst(A)$. The definitions for $\llist(A)$ are similar.
4.1069 +The list constructors are defined as follows:
4.1070 +\begin{eqnarray*}
4.1071 + \Nil & = & \Inl(\emptyset) \\
4.1072 + \Cons(a,l) & = & \Inr(\pair{a,l})
4.1073 +\end{eqnarray*}
4.1074 +The operator $\lstcase$ performs case analysis on these two alternatives:
4.1075 +\begin{eqnarray*}
4.1076 + \lstcase(c,h) & \equiv & \case(\lambda u.c, \split(h))
4.1077 +\end{eqnarray*}
4.1078 +Let us verify the two equations:
4.1079 +\begin{eqnarray*}
4.1080 + \lstcase(c, h, \Nil) & = &
4.1081 + \case(\lambda u.c, \split(h), \Inl(\emptyset)) \\
4.1082 + & = & (\lambda u.c)(\emptyset) \\
4.1083 + & = & c.\\[1ex]
4.1084 + \lstcase(c, h, \Cons(x,y)) & = &
4.1085 + \case(\lambda u.c, \split(h), \Inr(\pair{x,y})) \\
4.1086 + & = & \split(h, \pair{x,y}) \\
4.1087 + & = & h(x,y).
4.1088 +\end{eqnarray*}
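These equations can also be replayed concretely.  The following Python sketch
is purely illustrative (the package itself is Standard ML, and the encodings
of $\emptyset$, $\Inl$, $\Inr$ and ordered pairs below are ad-hoc assumptions),
but it checks both $\lstcase$ equations mechanically:

```python
# Illustrative model of the ZF constructions used above.
EMPTY = ()                           # stands for the empty set

def inl(a): return ("Inl", a)        # left injection
def inr(b): return ("Inr", b)        # right injection

def split(h):
    # split(h) applied to a pair <x,y> yields h(x,y)
    return lambda p: h(p[0], p[1])

def case_(f, g):
    # case(f,g) inspects the injection tag and applies f or g
    def apply(v):
        tag, x = v
        return f(x) if tag == "Inl" else g(x)
    return apply

def Nil(): return inl(EMPTY)         # Nil = Inl(0)
def Cons(a, l): return inr((a, l))   # Cons(a,l) = Inr(<a,l>)

def lstcase(c, h):
    # lstcase(c,h) == case(lambda u. c, split(h))
    return case_(lambda u: c, split(h))
```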
4.1089 +
4.1090 +\begin{figure}
4.1091 +\begin{small}
4.1092 +\begin{verbatim}
4.1093 +structure TF = Datatype_Fun
4.1094 + (val thy = Univ.thy;
4.1095 + val rec_specs =
4.1096 + [("tree", "univ(A)",
4.1097 + [(["Tcons"], "[i,i]=>i")]),
4.1098 + ("forest", "univ(A)",
4.1099 + [(["Fnil"], "i"),
4.1100 + (["Fcons"], "[i,i]=>i")])];
4.1101 + val rec_styp = "i=>i";
4.1102 + val ext = None
4.1103 + val sintrs =
4.1104 + ["[| a:A; f: forest(A) |] ==> Tcons(a,f) : tree(A)",
4.1105 + "Fnil : forest(A)",
4.1106 + "[| t: tree(A); f: forest(A) |] ==> Fcons(t,f) : forest(A)"];
4.1107 + val monos = [];
4.1108 + val type_intrs = datatype_intrs
4.1109 + val type_elims = datatype_elims);
4.1110 +\end{verbatim}
4.1111 +\end{small}
4.1112 +\hrule
4.1113 +\caption{Defining the datatype of trees and forests} \label{tf-fig}
4.1114 +\end{figure}
4.1115 +
4.1116 +
4.1117 +\subsection{Example: mutual recursion}
4.1118 +In the mutually recursive trees/forests~\cite[\S4.5]{paulson-set-II}, trees
4.1119 +have the one constructor $\Tcons$, while forests have the two constructors
4.1120 +$\Fnil$ and~$\Fcons$. Figure~\ref{tf-fig} presents the ML
4.1121 +definition. It has much in common with that of $\lst(A)$, including its
4.1122 +use of $\univ(A)$ for the domain and {\tt Univ.thy} for the parent theory.
4.1123 +The three introduction rules define the mutual recursion. The
4.1124 +distinguishing feature of this example is its two induction rules.
4.1125 +
4.1126 +The basic induction rule is called {\tt TF.induct}:
4.1127 +\[ \infer{P(x)}{x\in\TF(A) &
4.1128 + \infer*{P(\Tcons(a,f))}
4.1129 + {\left[\begin{array}{l} a\in A \\
4.1130 + f\in\forest(A) \\ P(f)
4.1131 + \end{array}
4.1132 + \right]_{a,f}}
4.1133 + & P(\Fnil)
4.1134 +	 & \infer*{P(\Fcons(t,f))}
4.1135 + {\left[\begin{array}{l} t\in\tree(A) \\ P(t) \\
4.1136 + f\in\forest(A) \\ P(f)
4.1137 + \end{array}
4.1138 + \right]_{t,f}} }
4.1139 +\]
4.1140 +This rule establishes a single predicate for $\TF(A)$, the union of the
4.1141 +recursive sets.
4.1142 +
4.1143 +Although such reasoning is sometimes useful
4.1144 +\cite[\S4.5]{paulson-set-II}, a proper mutual induction rule should establish
4.1145 +separate predicates for $\tree(A)$ and $\forest(A)$. The package calls this
4.1146 +rule {\tt TF.mutual\_induct}.  Observe the use of $P$ and $Q$ in the
4.1147 +induction hypotheses:
4.1148 +\[ \infer{(\forall z. z\in\tree(A)\imp P(z)) \conj
4.1149 + (\forall z. z\in\forest(A)\imp Q(z))}
4.1150 + {\infer*{P(\Tcons(a,f))}
4.1151 + {\left[\begin{array}{l} a\in A \\
4.1152 + f\in\forest(A) \\ Q(f)
4.1153 + \end{array}
4.1154 + \right]_{a,f}}
4.1155 + & Q(\Fnil)
4.1156 +	 & \infer*{Q(\Fcons(t,f))}
4.1157 + {\left[\begin{array}{l} t\in\tree(A) \\ P(t) \\
4.1158 + f\in\forest(A) \\ Q(f)
4.1159 + \end{array}
4.1160 + \right]_{t,f}} }
4.1161 +\]
4.1162 +As mentioned above, the package does not define a structural recursion
4.1163 +operator. I have described elsewhere how this is done
4.1164 +\cite[\S4.5]{paulson-set-II}.
4.1165 +
4.1166 +Both forest constructors have the form $\Inr(\cdots)$,
4.1167 +while the tree constructor has the form $\Inl(\cdots)$. This pattern would
4.1168 +hold regardless of how many tree or forest constructors there were.
4.1169 +\begin{eqnarray*}
4.1170 + \Tcons(a,l) & = & \Inl(\pair{a,l}) \\
4.1171 + \Fnil & = & \Inr(\Inl(\emptyset)) \\
4.1172 + \Fcons(a,l) & = & \Inr(\Inr(\pair{a,l}))
4.1173 +\end{eqnarray*}
4.1174 +There is only one case operator; it works on the union of the trees and
4.1175 +forests:
4.1176 +\begin{eqnarray*}
4.1177 + {\tt tree\_forest\_case}(f,c,g) & \equiv &
4.1178 + \case(\split(f),\, \case(\lambda u.c, \split(g)))
4.1179 +\end{eqnarray*}
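For example, the equation for $\Tcons$ is verified just as for $\lstcase$:
\begin{eqnarray*}
   {\tt tree\_forest\_case}(f,c,g, \Tcons(a,l))
   & = & \case(\split(f),\, \case(\lambda u.c, \split(g)), \Inl(\pair{a,l})) \\
   & = & \split(f, \pair{a,l}) \\
   & = & f(a,l)
\end{eqnarray*}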
4.1180 +
4.1181 +\begin{figure}
4.1182 +\begin{small}
4.1183 +\begin{verbatim}
4.1184 +structure Data = Datatype_Fun
4.1185 + (val thy = Univ.thy;
4.1186 + val rec_specs =
4.1187 + [("data", "univ(A Un B)",
4.1188 + [(["Con0"], "i"),
4.1189 + (["Con1"], "i=>i"),
4.1190 + (["Con2"], "[i,i]=>i"),
4.1191 + (["Con3"], "[i,i,i]=>i")])];
4.1192 + val rec_styp = "[i,i]=>i";
4.1193 + val ext = None
4.1194 + val sintrs =
4.1195 + ["Con0 : data(A,B)",
4.1196 + "[| a: A |] ==> Con1(a) : data(A,B)",
4.1197 + "[| a: A; b: B |] ==> Con2(a,b) : data(A,B)",
4.1198 + "[| a: A; b: B; d: data(A,B) |] ==> Con3(a,b,d) : data(A,B)"];
4.1199 + val monos = [];
4.1200 + val type_intrs = datatype_intrs
4.1201 + val type_elims = datatype_elims);
4.1202 +\end{verbatim}
4.1203 +\end{small}
4.1204 +\hrule
4.1205 +\caption{Defining the four-constructor sample datatype} \label{data-fig}
4.1206 +\end{figure}
4.1207 +
4.1208 +\subsection{A four-constructor datatype}
4.1209 +Finally let us consider a fairly general datatype.  It has four
4.1210 +constructors $\Con_0$, $\Con_1$, $\Con_2$ and $\Con_3$, taking zero to
4.1211 +three arguments, respectively.  Figure~\ref{data-fig} presents the ML definition.
4.1212 +Because this datatype has two set parameters, $A$ and~$B$, it specifies
4.1213 +$\univ(A\un B)$ as its domain. The structural induction rule has four
4.1214 +minor premises, one per constructor:
4.1215 +\[ \infer{P(x)}{x\in\data(A,B) &
4.1216 + P(\Con_0) &
4.1217 + \infer*{P(\Con_1(a))}{[a\in A]_a} &
4.1218 + \infer*{P(\Con_2(a,b))}
4.1219 + {\left[\begin{array}{l} a\in A \\ b\in B \end{array}
4.1220 + \right]_{a,b}} &
4.1221 + \infer*{P(\Con_3(a,b,d))}
4.1222 + {\left[\begin{array}{l} a\in A \\ b\in B \\
4.1223 + d\in\data(A,B) \\ P(d)
4.1224 + \end{array}
4.1225 + \right]_{a,b,d}} }
4.1226 +\]
4.1227 +
4.1228 +The constructor definitions are
4.1229 +\begin{eqnarray*}
4.1230 + \Con_0 & = & \Inl(\Inl(\emptyset)) \\
4.1231 + \Con_1(a) & = & \Inl(\Inr(a)) \\
4.1232 + \Con_2(a,b) & = & \Inr(\Inl(\pair{a,b})) \\
4.1233 + \Con_3(a,b,c) & = & \Inr(\Inr(\pair{a,b,c})).
4.1234 +\end{eqnarray*}
4.1235 +The case operator is
4.1236 +\begin{eqnarray*}
4.1237 + {\tt data\_case}(f_0,f_1,f_2,f_3) & \equiv &
4.1238 + \case(\begin{array}[t]{@{}l}
4.1239 + \case(\lambda u.f_0,\; f_1),\, \\
4.1240 + \case(\split(f_2),\; \split(\lambda v.\split(f_3(v)))) )
4.1241 + \end{array}
4.1242 +\end{eqnarray*}
4.1243 +This may look cryptic, but the case equations are trivial to verify.
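For instance, the equation for $\Con_3$ follows by the same steps as for
$\lstcase$ and $\split(\lambda x.\split(f(x)))$ above:
\begin{eqnarray*}
   {\tt data\_case}(f_0,f_1,f_2,f_3, \Con_3(a,b,c))
   & = & \case(\split(f_2),\; \split(\lambda v.\split(f_3(v))),\, \Inr(\pair{a,b,c})) \\
   & = & \split(\lambda v.\split(f_3(v)), \pair{a,b,c}) \\
   & = & (\lambda v.\split(f_3(v)))(a,\pair{b,c}) \\
   & = & \split(f_3(a), \pair{b,c}) \\
   & = & f_3(a,b,c)
\end{eqnarray*}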
4.1244 +
4.1245 +In the constructor definitions, the injections are balanced. A more naive
4.1246 +approach is to define $\Con_3(a,b,c)$ as
4.1247 +$\Inr(\Inr(\Inr(\pair{a,b,c})))$; instead, each constructor has two
4.1248 +injections. The difference here is small. But the ZF examples include a
4.1249 +60-element enumeration type, where each constructor has 5 or~6 injections.
4.1250 +The naive approach would require 1 to~59 injections; the definitions would be
4.1251 +quadratic in size. It is like the difference between the binary and unary
4.1252 +numeral systems.
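A rough back-of-the-envelope calculation makes the contrast concrete for the
60-constructor case.  The exact per-constructor costs below are an assumption
(they depend on precisely how the package balances the sums), but the growth
rates are the point:

```python
from math import ceil, log2

def naive_cost(i, n):
    # In the naive scheme, constructor i is Inr(...(Inr(Inl(x)))...) with
    # i copies of Inr plus a final Inl; the last constructor needs no Inl.
    return i + 1 if i < n - 1 else n - 1

def balanced_cost(n):
    # A balanced tree of binary sums has depth ceil(log2 n), so every
    # constructor sits under at most that many injections.
    return ceil(log2(n))

n = 60
naive_total = sum(naive_cost(i, n) for i in range(n))   # grows quadratically
balanced_total = n * balanced_cost(n)                   # grows as n log n
```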
4.1253 +
4.1254 +The package returns the constructor and case operator definitions as the
4.1255 +theorem list \verb|con_defs|. The head of this list defines the case
4.1256 +operator and the tail defines the constructors.
4.1257 +
4.1258 +The package returns the case equations, such as
4.1259 +\begin{eqnarray*}
4.1260 + {\tt data\_case}(f_0,f_1,f_2,f_3,\Con_3(a,b,c)) & = & f_3(a,b,c),
4.1261 +\end{eqnarray*}
4.1262 +as the theorem list \verb|case_eqns|. There is one equation per constructor.
4.1263 +
4.1264 +\subsection{Proving freeness theorems}
4.1265 +There are two kinds of freeness theorems:
4.1266 +\begin{itemize}
4.1267 +\item {\bf injectiveness} theorems, such as
4.1268 +\[ \Con_2(a,b) = \Con_2(a',b') \bimp a=a' \conj b=b' \]
4.1269 +
4.1270 +\item {\bf distinctness} theorems, such as
4.1271 +\[ \Con_1(a) \not= \Con_2(a',b') \]
4.1272 +\end{itemize}
4.1273 +Since the number of such theorems is quadratic in the number of constructors,
4.1274 +the package does not attempt to prove them all. Instead it returns tools for
4.1275 +proving desired theorems --- either explicitly or `on the fly' during
4.1276 +simplification or classical reasoning.
4.1277 +
4.1278 +The theorem list \verb|free_iffs| enables the simplifier to perform freeness
4.1279 +reasoning. This works by incremental unfolding of constructors that appear in
4.1280 +equations. The theorem list contains logical equivalences such as
4.1281 +\begin{eqnarray*}
4.1282 + \Con_0=c & \bimp & c=\Inl(\Inl(\emptyset)) \\
4.1283 + \Con_1(a)=c & \bimp & c=\Inl(\Inr(a)) \\
4.1284 + & \vdots & \\
4.1285 + \Inl(a)=\Inl(b) & \bimp & a=b \\
4.1286 + \Inl(a)=\Inr(b) & \bimp & \bot \\
4.1287 + \pair{a,b} = \pair{a',b'} & \bimp & a=a' \conj b=b'
4.1288 +\end{eqnarray*}
4.1289 +For example, these rewrite $\Con_1(a)=\Con_1(b)$ to $a=b$ in four steps.
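Spelled out, one such four-step rewrite uses the $\Con_1$ equivalence twice,
then injectiveness of $\Inl$ and of~$\Inr$ (the $\Inr$ rule being analogous
to the $\Inl$ rule listed above):
\begin{eqnarray*}
   \Con_1(a)=\Con_1(b) & \bimp & \Con_1(b)=\Inl(\Inr(a)) \\
   & \bimp & \Inl(\Inr(a))=\Inl(\Inr(b)) \\
   & \bimp & \Inr(a)=\Inr(b) \\
   & \bimp & a=b
\end{eqnarray*}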
4.1290 +
4.1291 +The theorem list \verb|free_SEs| enables the classical
4.1292 +reasoner to perform similar replacements. It consists of elimination rules
4.1293 +to replace $\Con_0=c$ by $c=\Inl(\Inl(\emptyset))$, and so forth, in the
4.1294 +assumptions.
4.1295 +
4.1296 +Such incremental unfolding combines freeness reasoning with other proof
4.1297 +steps. It has the unfortunate side-effect of unfolding definitions of
4.1298 +constructors in contexts such as $\exists x.\Con_1(a)=x$, where they should
4.1299 +be left alone. Calling the Isabelle tactic {\tt fold\_tac con\_defs}
4.1300 +restores the defined constants.
4.1301 +\fi %CADE
4.1302 +
4.1303 +\section{Conclusions and future work}
4.1304 +The fixedpoint approach makes it easy to implement a uniquely powerful
4.1305 +package for inductive and co-inductive definitions. It is efficient: it
4.1306 +processes most definitions in seconds and even a 60-constructor datatype
4.1307 +requires only two minutes. It is also simple: the package consists of
4.1308 +under 1100 lines (35K bytes) of Standard ML code. The first working
4.1309 +version took under a week to code.
4.1310 +
4.1311 +The approach is not restricted to set theory. It should be suitable for
4.1312 +any logic that has some notion of set and the Knaster-Tarski Theorem.
4.1313 +Indeed, Melham's inductive definition package for the HOL
4.1314 +system~\cite{camilleri92} implicitly proves this theorem.
4.1315 +
4.1316 +Datatype and co-datatype definitions furthermore require a particular set
4.1317 +closed under a suitable notion of ordered pair. I intend to use the
4.1318 +Isabelle/ZF package as the basis for a higher-order logic one, using
4.1319 +Isabelle/HOL\@. The necessary theory is already
4.1320 +mechanized~\cite{paulson-coind}.  HOL represents sets by unary predicates;
4.1321 +defining the corresponding types may cause complications.
4.1322 +
4.1323 +
4.1324 +\bibliographystyle{plain}
4.1325 +\bibliography{atp,theory,funprog,isabelle}
4.1326 +%%%%%\doendnotes
4.1327 +
4.1328 +\ifCADE\typeout{****Omitting appendices from CADE version!}
4.1329 +\else
4.1330 +\newpage
4.1331 +\appendix
4.1332 +\section{Inductive and co-inductive definitions: user's guide}
4.1333 +The ML functors \verb|Inductive_Fun| and \verb|Co_Inductive_Fun| build
4.1334 +inductive and co-inductive definitions, respectively. This section describes
4.1335 +how to invoke them.
4.1336 +
4.1337 +\subsection{The result structure}
4.1338 +Many of the result structure's components have been discussed
4.1339 +in~\S\ref{basic-sec}; others are self-explanatory.
4.1340 +\begin{description}
4.1341 +\item[\tt thy] is the new theory containing the recursive sets.
4.1342 +
4.1343 +\item[\tt defs] is the list of definitions of the recursive sets.
4.1344 +
4.1345 +\item[\tt bnd\_mono] is a monotonicity theorem for the fixedpoint operator.
4.1346 +
4.1347 +\item[\tt unfold] is a fixedpoint equation for the recursive set (the union of
4.1348 +the recursive sets, in the case of mutual recursion).
4.1349 +
4.1350 +\item[\tt dom\_subset] is a theorem stating inclusion in the domain.
4.1351 +
4.1352 +\item[\tt intrs] is the list of introduction rules, now proved as theorems, for
4.1353 +the recursive sets.
4.1354 +
4.1355 +\item[\tt elim] is the elimination rule.
4.1356 +
4.1357 +\item[\tt mk\_cases] is a function to create simplified instances of {\tt
4.1358 +elim}, using freeness reasoning on some underlying datatype.
4.1359 +\end{description}
4.1360 +
4.1361 +For an inductive definition, the result structure contains two induction rules,
4.1362 +{\tt induct} and \verb|mutual_induct|. For a co-inductive definition, it
4.1363 +contains the rule \verb|co_induct|.
4.1364 +
4.1365 +\begin{figure}
4.1366 +\begin{ttbox}
4.1367 +sig
4.1368 +val thy : theory
4.1369 +val defs : thm list
4.1370 +val bnd_mono : thm
4.1371 +val unfold : thm
4.1372 +val dom_subset : thm
4.1373 +val intrs : thm list
4.1374 +val elim : thm
4.1375 +val mk_cases : thm list -> string -> thm
4.1376 +{\it(Inductive definitions only)}
4.1377 +val induct : thm
4.1378 +val mutual_induct: thm
4.1379 +{\it(Co-inductive definitions only)}
4.1380 +val co_induct : thm
4.1381 +end
4.1382 +\end{ttbox}
4.1383 +\hrule
4.1384 +\caption{The result of a (co-)inductive definition} \label{def-result-fig}
4.1385 +\end{figure}
4.1386 +
4.1387 +Figure~\ref{def-result-fig} summarizes the two result signatures,
4.1388 +specifying the types of all these components.
4.1389 +
4.1390 +\begin{figure}
4.1391 +\begin{ttbox}
4.1392 +sig
4.1393 +val thy : theory
4.1394 +val rec_doms : (string*string) list
4.1395 +val sintrs : string list
4.1396 +val monos : thm list
4.1397 +val con_defs : thm list
4.1398 +val type_intrs : thm list
4.1399 +val type_elims : thm list
4.1400 +end
4.1401 +\end{ttbox}
4.1402 +\hrule
4.1403 +\caption{The argument of a (co-)inductive definition} \label{def-arg-fig}
4.1404 +\end{figure}
4.1405 +
4.1406 +\subsection{The argument structure}
4.1407 +Both \verb|Inductive_Fun| and \verb|Co_Inductive_Fun| take the same argument
4.1408 +structure (Figure~\ref{def-arg-fig}). Its components are as follows:
4.1409 +\begin{description}
4.1410 +\item[\tt thy] is the definition's parent theory, which {\it must\/}
4.1411 +declare constants for the recursive sets.
4.1412 +
4.1413 +\item[\tt rec\_doms] is a list of pairs, associating the name of each recursive
4.1414 +set with its domain.
4.1415 +
4.1416 +\item[\tt sintrs] specifies the desired introduction rules as strings.
4.1417 +
4.1418 +\item[\tt monos] consists of monotonicity theorems for each operator applied
4.1419 +to a recursive set in the introduction rules.
4.1420 +
4.1421 +\item[\tt con\_defs] contains definitions of constants appearing in the
4.1422 +introduction rules. The (co-)datatype package supplies the constructors'
4.1423 +definitions here. Most direct calls of \verb|Inductive_Fun| or
4.1424 +\verb|Co_Inductive_Fun| pass the empty list; one exception is the primitive
4.1425 +recursive functions example (\S\ref{primrec-sec}).
4.1426 +
4.1427 +\item[\tt type\_intrs] consists of introduction rules for type-checking the
4.1428 + definition, as discussed in~\S\ref{basic-sec}. They are applied using
4.1429 + depth-first search; you can trace the proof by setting
4.1430 + \verb|trace_DEPTH_FIRST := true|.
4.1431 +
4.1432 +\item[\tt type\_elims] consists of elimination rules for type-checking the
4.1433 +definition. They are presumed to be `safe' and are applied as much as
4.1434 +possible, prior to the {\tt type\_intrs} search.
4.1435 +\end{description}
4.1436 +The package has a few notable restrictions:
4.1437 +\begin{itemize}
4.1438 +\item The parent theory, {\tt thy}, must declare the recursive sets as
4.1439 + constants. You can extend a theory with new constants using {\tt
4.1440 + addconsts}, as illustrated in~\S\ref{ind-eg-sec}. If the inductive
4.1441 + definition also requires new concrete syntax, then it is simpler to
4.1442 + express the parent theory using a theory file. It is often convenient to
4.1443 + define an infix syntax for relations, say $a\prec b$ for $\pair{a,b}\in
4.1444 + R$.
4.1445 +
4.1446 +\item The names of the recursive sets must be identifiers, not infix
4.1447 +operators.
4.1448 +
4.1449 +\item Side-conditions must not be conjunctions. However, an introduction rule
4.1450 +may contain any number of side-conditions.
4.1451 +\end{itemize}
4.1452 +
4.1453 +
4.1454 +\section{Datatype and co-datatype definitions: user's guide}
4.1455 +The ML functors \verb|Datatype_Fun| and \verb|Co_Datatype_Fun| define datatypes
4.1456 +and co-datatypes, invoking \verb|Inductive_Fun| and
4.1457 +\verb|Co_Inductive_Fun| to make the underlying (co-)inductive definitions.
4.1458 +
4.1459 +
4.1460 +\subsection{The result structure}
4.1461 +The result structure extends that of (co-)inductive definitions
4.1462 +(Figure~\ref{def-result-fig}) with several additional items:
4.1463 +\begin{ttbox}
4.1464 +val con_thy : theory
4.1465 +val con_defs : thm list
4.1466 +val case_eqns : thm list
4.1467 +val free_iffs : thm list
4.1468 +val free_SEs : thm list
4.1469 +val mk_free : string -> thm
4.1470 +\end{ttbox}
4.1471 +Most of these have been discussed in~\S\ref{data-sec}. Here is a summary:
4.1472 +\begin{description}
4.1473 +\item[\tt con\_thy] is a new theory containing definitions of the
4.1474 +(co-)datatype's constructors and case operator. It also declares the
4.1475 +recursive sets as constants, so that it may serve as the parent
4.1476 +theory for the (co-)inductive definition.
4.1477 +
4.1478 +\item[\tt con\_defs] is a list of definitions: the case operator followed by
4.1479 +the constructors. This theorem list can be supplied to \verb|mk_cases|, for
4.1480 +example.
4.1481 +
4.1482 +\item[\tt case\_eqns] is a list of equations, stating that the case operator
4.1483 +inverts each constructor.
4.1484 +
4.1485 +\item[\tt free\_iffs] is a list of logical equivalences to perform freeness
4.1486 +reasoning by rewriting. A typical application has the form
4.1487 +\begin{ttbox}
4.1488 +by (asm_simp_tac (ZF_ss addsimps free_iffs) 1);
4.1489 +\end{ttbox}
4.1490 +
4.1491 +\item[\tt free\_SEs] is a list of `safe' elimination rules to perform freeness
4.1492 +reasoning. It can be supplied to \verb|eresolve_tac| or to the classical
4.1493 +reasoner:
4.1494 +\begin{ttbox}
4.1495 +by (fast_tac (ZF_cs addSEs free_SEs) 1);
4.1496 +\end{ttbox}
4.1497 +
4.1498 +\item[\tt mk\_free] is a function to prove freeness properties, specified as
4.1499 +strings. The theorems can be expressed in various forms, such as logical
4.1500 +equivalences or elimination rules.
4.1501 +\end{description}
4.1502 +
4.1503 +The result structure also inherits everything from the underlying
4.1504 +(co-)inductive definition, such as the introduction rules, elimination rule,
4.1505 +and induction/co-induction rule.
4.1506 +
4.1507 +
4.1508 +\begin{figure}
4.1509 +\begin{ttbox}
4.1510 +sig
4.1511 +val thy : theory
4.1512 +val rec_specs : (string * string * (string list*string)list) list
4.1513 +val rec_styp : string
4.1514 +val ext : Syntax.sext option
4.1515 +val sintrs : string list
4.1516 +val monos : thm list
4.1517 +val type_intrs: thm list
4.1518 +val type_elims: thm list
4.1519 +end
4.1520 +\end{ttbox}
4.1521 +\hrule
4.1522 +\caption{The argument of a (co-)datatype definition} \label{data-arg-fig}
4.1523 +\end{figure}
4.1524 +
4.1525 +\subsection{The argument structure}
4.1526 +Both (co-)datatype functors take the same argument structure
4.1527 +(Figure~\ref{data-arg-fig}). It does not extend that for (co-)inductive
4.1528 +definitions, but shares several components and passes them uninterpreted to
4.1529 +\verb|Inductive_Fun| or
4.1530 +\verb|Co_Inductive_Fun|.  The new components are as follows:
4.1531 +\begin{description}
4.1532 +\item[\tt thy] is the (co-)datatype's parent theory. It {\it must not\/}
4.1533 +declare constants for the recursive sets. Recall that (co-)inductive
4.1534 +definitions have the opposite restriction.
4.1535 +
4.1536 +\item[\tt rec\_specs] is a list of triples of the form ({\it recursive set\/},
4.1537 +{\it domain\/}, {\it constructors\/}) for each mutually recursive set. {\it
4.1538 +Constructors\/} is a list of pairs of the form ({\it names}, {\it type}).  See the discussion and
4.1539 +examples in~\S\ref{data-sec}.
4.1540 +
4.1541 +\item[\tt rec\_styp] is the common meta-type of the mutually recursive sets,
4.1542 +specified as a string. They must all have the same type because all must
4.1543 +take the same parameters.
4.1544 +
4.1545 +\item[\tt ext] is an optional syntax extension, usually omitted by writing
4.1546 +{\tt None}. You can supply mixfix syntax for the constructors by supplying
4.1547 +\begin{ttbox}
4.1548 +Some (Syntax.simple_sext [{\it mixfix declarations\/}])
4.1549 +\end{ttbox}
4.1550 +\end{description}
4.1551 +The choice of domain is usually simple. Isabelle/ZF defines the set
4.1552 +$\univ(A)$, which contains~$A$ and is closed under the standard Cartesian
4.1553 +products and disjoint sums \cite[\S4.2]{paulson-set-II}. In a typical
4.1554 +datatype definition with set parameters $A_1$, \ldots, $A_k$, a suitable
4.1555 +domain for all the recursive sets is $\univ(A_1\un\cdots\un A_k)$. For a
4.1556 +co-datatype definition, the set
4.1557 +$\quniv(A)$ contains~$A$ and is closed under the variant Cartesian products
4.1558 +and disjoint sums; the appropriate domain is
4.1559 +$\quniv(A_1\un\cdots\un A_k)$.
4.1560 +
4.1561 +The {\tt sintrs} specify the introduction rules, which govern the recursive
4.1562 +structure of the datatype. Introduction rules may involve monotone operators
4.1563 +and side-conditions to express things that go beyond the usual notion of
4.1564 +datatype. The theorem lists {\tt monos}, {\tt type\_intrs} and {\tt
4.1565 +type\_elims} should contain precisely what is needed for the underlying
4.1566 +(co-)inductive definition.  Isabelle/ZF defines theorem lists that can be
4.1567 +used for the latter two components:
4.1568 +\begin{itemize}
4.1569 +\item {\tt datatype\_intrs} and {\tt datatype\_elims} are type-checking rules
4.1570 +for $\univ(A)$.
4.1571 +\item {\tt co\_datatype\_intrs} and {\tt co\_datatype\_elims} are type-checking
4.1572 +rules for $\quniv(A)$.
4.1573 +\end{itemize}
4.1574 +In typical definitions, these theorem lists need not be supplemented with
4.1575 +other theorems.
4.1576 +
4.1577 +The constructor definitions' right-hand sides can overlap: one may occur as a subterm of another.
4.1578 +A simple example is the datatype of combinators, whose constructors are
4.1579 +\begin{eqnarray*}
4.1580 + {\tt K} & \equiv & \Inl(\emptyset) \\
4.1581 + {\tt S} & \equiv & \Inr(\Inl(\emptyset)) \\
4.1582 +  p{\tt\#}q & \equiv & \Inr(\Inr(\pair{p,q}))
4.1583 +\end{eqnarray*}
4.1584 +Unlike in previous versions of Isabelle, \verb|fold_tac| now ensures that the
4.1585 +longest right-hand sides are folded first.
4.1586 +
4.1587 +\fi
4.1588 +\end{document}
5.1 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
5.2 +++ b/doc-src/sedindex Tue Nov 09 16:47:38 1993 +0100
5.3 @@ -0,0 +1,19 @@
5.4 +#! /bin/sh
5.5 +#
5.6 +#sedindex - shell script to create indexes, preprocessing LaTeX's .idx file
5.7 +#
5.8 +# puts strings prefixed by * into \tt font
5.9 +# terminator characters for strings are |!@{}
5.10 +#
5.11 +# change *"X"Y"Z"W to "X"Y"Z"W@{\tt "X"Y"Z"W}
5.12 +# change *"X"Y"Z to "X"Y"Z@{\tt "X"Y"Z}
5.13 +# change *"X"Y to "X"Y@{\tt "X"Y}
5.14 +# change *"X to "X@{\tt "X}
5.15 +# change *IDENT to IDENT@{\tt IDENT}
5.16 +# where IDENT is any string not containing | ! or @
5.17 +# FOUR backslashes: to escape the shell AND sed
5.18 +sed -e "s~\*\(\".\".\".\".\)~\1@{\\\\tt \1}~g
5.19 +s~\*\(\".\".\".\)~\1@{\\\\tt \1}~g
5.20 +s~\*\(\".\".\)~\1@{\\\\tt \1}~g
5.21 +s~\*\(\".\)~\1@{\\\\tt \1}~g
5.22 +s~\*\([^|!@{}][^|!@{}]*\)~\1@{\\\\tt \1}~g" $1.idx | makeindex -q -o $1.ind