\chapter{Higher-Order Logic}
\index{higher-order logic|(}
\index{HOL system@{\sc hol} system}

The theory~\thydx{HOL} implements higher-order logic.  It is based on
Gordon's~{\sc hol} system~\cite{mgordon-hol}, which itself is based on
Church's original paper~\cite{church40}.  Andrews's book~\cite{andrews86} is a
full description of higher-order logic.  Experience with the {\sc hol} system
has demonstrated that higher-order logic is useful for hardware verification;
beyond this, it is widely applicable in many areas of mathematics.  It is
weaker than ZF set theory, but for most applications this does not matter.  If
you prefer {\ML} to Lisp, you will probably prefer HOL to~ZF.

Previous releases of Isabelle included a different version of~HOL, with
explicit type inference rules~\cite{paulson-COLOG}.  This version no longer
exists, but \thydx{ZF} supports a similar style of reasoning.

HOL has a distinct feel, compared with ZF and CTT.  It identifies object-level
types with meta-level types, taking advantage of Isabelle's built-in type
checker.  It identifies object-level functions with meta-level functions, so
it uses Isabelle's operations for abstraction and application.  There is no
`apply' operator: function applications are written simply as~$f(a)$ rather
than $f{\tt`}a$.

These identifications allow Isabelle to support HOL particularly nicely, but
they also mean that HOL requires more sophistication from the user --- in
particular, an understanding of Isabelle's type system.  Beginners should work
with {\tt show_types} set to {\tt true}.  Gain experience by working in
first-order logic before attempting to use higher-order logic.  This chapter
assumes familiarity with~FOL.


\begin{figure}
\begin{center}
\begin{tabular}{rrr}
  \it name      &\it meta-type  & \it description \\
  \cdx{Trueprop}& $bool\To prop$                & coercion to $prop$\\
  \cdx{not}     & $bool\To bool$                & negation ($\neg$) \\
  \cdx{True}    & $bool$                        & tautology ($\top$) \\
  \cdx{False}   & $bool$                        & absurdity ($\bot$) \\
  \cdx{if}      & $[bool,\alpha,\alpha]\To\alpha::term$ & conditional \\
  \cdx{Inv}     & $(\alpha\To\beta)\To(\beta\To\alpha)$ & function inversion\\
  \cdx{Let}     & $[\alpha,\alpha\To\beta]\To\beta$ & let binder
\end{tabular}
\end{center}
\subcaption{Constants}

\begin{center}
\index{"@@{\tt\at} symbol}
\index{*"! symbol}\index{*"? symbol}
\index{*"?"! symbol}\index{*"E"X"! symbol}
\begin{tabular}{llrrr}
  \it symbol &\it name     &\it meta-type & \it description \\
  \tt\at & \cdx{Eps}  & $(\alpha\To bool)\To\alpha::term$ &
        Hilbert description ($\epsilon$) \\
  {\tt!~} or \sdx{ALL}  & \cdx{All}  & $(\alpha::term\To bool)\To bool$ &
        universal quantifier ($\forall$) \\
  {\tt?~} or \sdx{EX}   & \cdx{Ex}   & $(\alpha::term\To bool)\To bool$ &
        existential quantifier ($\exists$) \\
  {\tt?!} or {\tt EX!}  & \cdx{Ex1}  & $(\alpha::term\To bool)\To bool$ &
        unique existence ($\exists!$)
\end{tabular}
\end{center}
\subcaption{Binders}

\begin{center}
\index{*"= symbol}
\index{&@{\tt\&} symbol}
\index{*"| symbol}
\index{*"-"-"> symbol}
\begin{tabular}{rrrr}
  \it symbol    & \it meta-type & \it priority & \it description \\
  \sdx{o}       & $[\beta\To\gamma,\alpha\To\beta]\To (\alpha\To\gamma)$ &
        Right 50 & composition ($\circ$) \\
  \tt =         & $[\alpha::term,\alpha]\To bool$ & Left 50 & equality ($=$) \\
  \tt <         & $[\alpha::ord,\alpha]\To bool$ & Left 50 & less than ($<$) \\
  \tt <=        & $[\alpha::ord,\alpha]\To bool$ & Left 50 &
        less than or equals ($\leq$)\\
  \tt \&        & $[bool,bool]\To bool$ & Right 35 & conjunction ($\conj$) \\
  \tt |         & $[bool,bool]\To bool$ & Right 30 & disjunction ($\disj$) \\
  \tt -->       & $[bool,bool]\To bool$ & Right 25 & implication ($\imp$)
\end{tabular}
\end{center}
\subcaption{Infixes}
\caption{Syntax of {\tt HOL}} \label{hol-constants}
\end{figure}


\begin{figure}
\index{*let symbol}
\index{*in symbol}
\dquotes
\[\begin{array}{rclcl}
    term & = & \hbox{expression of class~$term$} \\
         & | & "\at~" id~id^* " . " formula \\
         & | &
        \multicolumn{3}{l}{"let"~id~"="~term";"\dots";"~id~"="~term~"in"~term}
        \\[2ex]
 formula & = & \hbox{expression of type~$bool$} \\
         & | & term " = " term \\
         & | & term " \ttilde= " term \\
         & | & term " < " term \\
         & | & term " <= " term \\
         & | & "\ttilde\ " formula \\
         & | & formula " \& " formula \\
         & | & formula " | " formula \\
         & | & formula " --> " formula \\
         & | & "!~~~" id~id^* " . " formula
         & | & "ALL~" id~id^* " . " formula \\
         & | & "?~~~" id~id^* " . " formula
         & | & "EX~~" id~id^* " . " formula \\
         & | & "?!~~" id~id^* " . " formula
         & | & "EX!~" id~id^* " . " formula
  \end{array}
\]
\caption{Full grammar for HOL} \label{hol-grammar}
\end{figure}


\section{Syntax}
The type class of higher-order terms is called~\cldx{term}.  Type variables
range over this class by default.  The equality symbol and quantifiers are
polymorphic over class {\tt term}.

Class \cldx{ord} consists of all ordered types; the relations $<$ and
$\leq$ are polymorphic over this class, as are the functions
\cdx{mono}, \cdx{min} and \cdx{max}.  Three other
type classes --- \cldx{plus}, \cldx{minus} and \cldx{times} --- permit
overloading of the operators {\tt+}, {\tt-} and {\tt*}.  In particular,
{\tt-} is overloaded for set difference and subtraction.
\index{*"+ symbol}
\index{*"- symbol}
\index{*"* symbol}

Figure~\ref{hol-constants} lists the constants (including infixes and
binders), while Fig.\ts\ref{hol-grammar} presents the grammar of
higher-order logic.  Note that $a$\verb|~=|$b$ is translated to
$\neg(a=b)$.

\begin{warn}
  HOL has no if-and-only-if connective; logical equivalence is expressed
  using equality.  But equality has a high priority, as befits a
  relation, while if-and-only-if typically has the lowest priority.  Thus,
  $\neg\neg P=P$ abbreviates $\neg\neg (P=P)$ and not $(\neg\neg P)=P$.
  When using $=$ to mean logical equivalence, enclose both operands in
  parentheses.
\end{warn}

\subsection{Types}\label{HOL-types}
The type of formulae, \tydx{bool}, belongs to class \cldx{term}; thus,
formulae are terms.  The built-in type~\tydx{fun}, which constructs function
types, is overloaded with arity {\tt(term,term)term}.  Thus, $\sigma\To\tau$
belongs to class~{\tt term} if $\sigma$ and~$\tau$ do, allowing quantification
over functions.
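Since $bool$ itself belongs to class {\tt term}, we may, for example, quantify
over all functions of type $bool\To bool$.  A small illustration (not a rule
of the theory, but easily provable in it) is that every such function
satisfies
\[ \forall f::bool\To bool.\ \forall x.\ f(f(f(x))) = f(x), \]
as may be checked by considering the four functions of that type.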

Types in HOL must be non-empty; otherwise the quantifier rules would be
unsound.  I have commented on this elsewhere~\cite[\S7]{paulson-COLOG}.

\index{type definitions}
Gordon's {\sc hol} system supports {\bf type definitions}.  A type is
defined by exhibiting an existing type~$\sigma$, a predicate~$P::\sigma\To
bool$, and a theorem of the form $\exists x::\sigma.P(x)$.  Thus~$P$
specifies a non-empty subset of~$\sigma$, and the new type denotes this
subset.  New function constants are generated to establish an isomorphism
between the new type and the subset.  If type~$\sigma$ involves type
variables $\alpha@1$, \ldots, $\alpha@n$, then the type definition creates
a type constructor $(\alpha@1,\ldots,\alpha@n)ty$ rather than a particular
type.  Melham~\cite{melham89} discusses type definitions at length, with
examples.

Isabelle does not support type definitions at present.  Instead, they are
mimicked by explicit definitions of isomorphism functions.  The definitions
should be supported by theorems of the form $\exists x::\sigma.P(x)$, but
Isabelle cannot enforce this.
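For illustration only (the names {\tt Rep} and {\tt Abs} below are arbitrary,
not constants of any theory), mimicking a definition of a type~$\tau$ carved
out of~$\sigma$ by the predicate~$P$ amounts to declaring functions
${\tt Rep}::\tau\To\sigma$ and ${\tt Abs}::\sigma\To\tau$ and asserting axioms
such as
\[ P({\tt Rep}(x)) \qquad
   {\tt Abs}({\tt Rep}(x)) = x \qquad
   P(y) \imp {\tt Rep}({\tt Abs}(y)) = y, \]
which make {\tt Rep} an injection of~$\tau$ onto the subset of~$\sigma$
determined by~$P$, with inverse~{\tt Abs}.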


\subsection{Binders}
Hilbert's {\bf description} operator~$\epsilon x.P[x]$ stands for some~$a$
satisfying~$P[a]$, if such an~$a$ exists.  Since all terms in HOL denote something, a
description is always meaningful, but we do not know its value unless $P[x]$
defines it uniquely.  We may write descriptions as \cdx{Eps}($P$) or use the
syntax \hbox{\tt \at $x$.$P[x]$}.

Existential quantification is defined by
\[ \exists x.P(x) \;\equiv\; P(\epsilon x.P(x)). \]
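This definition, together with the rule {\tt selectI} (Fig.\ts\ref{hol-rules}),
yields the usual introduction rule for~$\exists$.  Here is a sketch: if $P(a)$
holds for some~$a$, then {\tt selectI} gives $P(\epsilon x.P(x))$, which is
$\exists x.P(x)$ by definition:
\[ \begin{array}{ll}
   P(a)               & \mbox{given} \\
   P(\epsilon x.P(x)) & \mbox{by {\tt selectI}} \\
   \exists x.P(x)     & \mbox{by the definition of $\exists$}
   \end{array} \]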
The unique existence quantifier, $\exists!x.P[x]$, is defined in terms
of~$\exists$ and~$\forall$.  An Isabelle binder, it admits nested
quantifications.  For instance, $\exists!x y.P(x,y)$ abbreviates
$\exists!x. \exists!y.P(x,y)$; note that this does not mean that there
exists a unique pair $(x,y)$ satisfying~$P(x,y)$.

\index{*"! symbol}\index{*"? symbol}\index{HOL system@{\sc hol} system}
Quantifiers have two notations.  As in Gordon's {\sc hol} system, HOL
uses~{\tt!}\ and~{\tt?}\ to stand for $\forall$ and $\exists$.  The
existential quantifier must be followed by a space; thus {\tt?x} is an
unknown, while \verb'? x.f(x)=y' is a quantification.  Isabelle's usual
notation for quantifiers, \sdx{ALL} and \sdx{EX}, is also available.  Both
notations are accepted for input.  The {\ML} reference
\ttindexbold{HOL_quantifiers} governs the output notation.  If set to {\tt
true}, then~{\tt!}\ and~{\tt?}\ are displayed; this is the default.  If set
to {\tt false}, then~{\tt ALL} and~{\tt EX} are displayed.

All these binders have priority 10.


\subsection{The \sdx{let} and \sdx{case} constructions}
Local abbreviations can be introduced by a {\tt let} construct whose
syntax appears in Fig.\ts\ref{hol-grammar}.  Internally it is translated into
the constant~\cdx{Let}.  It can be expanded by rewriting with its
definition, \tdx{Let_def}.

HOL also defines the basic syntax
\[\dquotes"case"~e~"of"~c@1~"=>"~e@1~"|" \dots "|"~c@n~"=>"~e@n\]
as a uniform means of expressing {\tt case} constructs.  Therefore {\tt
case} and \sdx{of} are reserved words.  However, so far this is mere
syntax and has no logical meaning.  By declaring translations, you can
cause instances of the {\tt case} construct to denote applications of
particular case operators.  The patterns supplied for $c@1$,~\ldots,~$c@n$
distinguish among the different case operators.  For an example, see the
case construct for lists on page~\pageref{hol-list} below.


\begin{figure}
\begin{ttbox}\makeatother
\tdx{refl}           t = (t::'a)
\tdx{subst}          [| s=t; P(s) |] ==> P(t::'a)
\tdx{ext}            (!!x::'a. (f(x)::'b) = g(x)) ==> (\%x.f(x)) = (\%x.g(x))
\tdx{impI}           (P ==> Q) ==> P-->Q
\tdx{mp}             [| P-->Q; P |] ==> Q
\tdx{iff}            (P-->Q) --> (Q-->P) --> (P=Q)
\tdx{selectI}        P(x::'a) ==> P(@x.P(x))
\tdx{True_or_False}  (P=True) | (P=False)
\end{ttbox}
\caption{The {\tt HOL} rules} \label{hol-rules}
\end{figure}


\begin{figure}\hfuzz=4pt%suppress "Overfull \hbox" message
\begin{ttbox}\makeatother
\tdx{True_def}   True  == ((\%x::bool.x)=(\%x.x))
\tdx{All_def}    All   == (\%P. P = (\%x.True))
\tdx{Ex_def}     Ex    == (\%P. P(@x.P(x)))
\tdx{False_def}  False == (!P.P)
\tdx{not_def}    not   == (\%P. P-->False)
\tdx{and_def}    op &  == (\%P Q. !R. (P-->Q-->R) --> R)
\tdx{or_def}     op |  == (\%P Q. !R. (P-->R) --> (Q-->R) --> R)
\tdx{Ex1_def}    Ex1   == (\%P. ? x. P(x) & (! y. P(y) --> y=x))

\tdx{Inv_def}    Inv   == (\%(f::'a=>'b) y. @x. f(x)=y)
\tdx{o_def}      op o  == (\%(f::'b=>'c) g (x::'a). f(g(x)))
\tdx{if_def}     if    == (\%P x y.@z::'a.(P=True --> z=x) & (P=False --> z=y))
\tdx{Let_def}    Let(s,f) == f(s)
\end{ttbox}
\caption{The {\tt HOL} definitions} \label{hol-defs}
\end{figure}


\section{Rules of inference}
Figure~\ref{hol-rules} shows the inference rules of~HOL, with their~{\ML}
names.  Some of the rules deserve additional comments:
\begin{ttdescription}
\item[\tdx{ext}] expresses extensionality of functions.
\item[\tdx{iff}] asserts that logically equivalent formulae are
  equal.
\item[\tdx{selectI}] gives the defining property of the Hilbert
  $\epsilon$-operator.  It is a form of the Axiom of Choice.  The derived rule
  \tdx{select_equality} (see below) is often easier to use.
\item[\tdx{True_or_False}] makes the logic classical.\footnote{In
    fact, the $\epsilon$-operator already makes the logic classical, as
    shown by Diaconescu; see Paulson~\cite{paulson-COLOG} for details.}
\end{ttdescription}

HOL follows standard practice in higher-order logic: only a few connectives
are taken as primitive, with the remainder defined obscurely
(Fig.\ts\ref{hol-defs}).  Gordon's {\sc hol} system expresses the
corresponding definitions \cite[page~270]{mgordon-hol} using
object-equality~({\tt=}), which is possible because equality in higher-order
logic may equate formulae and even functions over formulae.  But theory~HOL,
like all other Isabelle theories, uses meta-equality~({\tt==}) for
definitions.
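As an example of unfolding such a definition, here is a sketch of how the
introduction rule for conjunction can be derived from \tdx{and_def}, which
defines $P\conj Q$ to be $\forall R.\,(P\imp Q\imp R)\imp R$.  Assume $P$
and~$Q$; given an arbitrary~$R$ such that $P\imp Q\imp R$, two applications of
{\tt mp} yield~$R$.  Discharging the assumption and generalizing over~$R$
gives
\[ \forall R.\,(P\imp Q\imp R)\imp R, \]
which is $P\conj Q$ by definition.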

Some of the rules mention type variables; for
example, {\tt refl} mentions the type variable~{\tt'a}.  This allows you to
instantiate type variables explicitly by calling {\tt res_inst_tac}.  By
default, explicit type variables have class \cldx{term}.

Include type constraints whenever you state a polymorphic goal.  Type
inference may otherwise make the goal more polymorphic than you intended,
with confusing results.

\begin{warn}
  If resolution fails for no obvious reason, try setting
  \ttindex{show_types} to {\tt true}, causing Isabelle to display types of
  terms.  Possibly set \ttindex{show_sorts} to {\tt true} as well, causing
  Isabelle to display sorts.

  \index{unification!incompleteness of}
  Where function types are involved, Isabelle's unification code does not
  guarantee to find instantiations for type variables automatically.  Be
  prepared to use \ttindex{res_inst_tac} instead of {\tt resolve_tac},
  possibly instantiating type variables.  Setting
  \ttindex{Unify.trace_types} to {\tt true} causes Isabelle to report
  omitted search paths during unification.\index{tracing!of unification}
\end{warn}


\begin{figure}
\begin{ttbox}
\tdx{sym}         s=t ==> t=s
\tdx{trans}       [| r=s; s=t |] ==> r=t
\tdx{ssubst}      [| t=s; P(s) |] ==> P(t::'a)
\tdx{box_equals}  [| a=b; a=c; b=d |] ==> c=d
\tdx{arg_cong}    x=y ==> f(x)=f(y)
\tdx{fun_cong}    f=g ==> f(x)=g(x)
\subcaption{Equality}

\tdx{TrueI}       True
\tdx{FalseE}      False ==> P

\tdx{conjI}       [| P; Q |] ==> P&Q
\tdx{conjunct1}   [| P&Q |] ==> P
\tdx{conjunct2}   [| P&Q |] ==> Q
\tdx{conjE}       [| P&Q; [| P; Q |] ==> R |] ==> R

\tdx{disjI1}      P ==> P|Q
\tdx{disjI2}      Q ==> P|Q
\tdx{disjE}       [| P | Q; P ==> R; Q ==> R |] ==> R

\tdx{notI}        (P ==> False) ==> ~ P
\tdx{notE}        [| ~ P; P |] ==> R
\tdx{impE}        [| P-->Q; P; Q ==> R |] ==> R
\subcaption{Propositional logic}

\tdx{iffI}        [| P ==> Q; Q ==> P |] ==> P=Q
\tdx{iffD1}       [| P=Q; P |] ==> Q
\tdx{iffD2}       [| P=Q; Q |] ==> P
\tdx{iffE}        [| P=Q; [| P --> Q; Q --> P |] ==> R |] ==> R

\tdx{eqTrueI}     P ==> P=True
\tdx{eqTrueE}     P=True ==> P
\subcaption{Logical equivalence}

\end{ttbox}
\caption{Derived rules for HOL} \label{hol-lemmas1}
\end{figure}


\begin{figure}
\begin{ttbox}\makeatother
\tdx{allI}      (!!x::'a. P(x)) ==> !x. P(x)
\tdx{spec}      !x::'a.P(x) ==> P(x)
\tdx{allE}      [| !x.P(x); P(x) ==> R |] ==> R
\tdx{all_dupE}  [| !x.P(x); [| P(x); !x.P(x) |] ==> R |] ==> R

\tdx{exI}       P(x) ==> ? x::'a.P(x)
\tdx{exE}       [| ? x::'a.P(x); !!x. P(x) ==> Q |] ==> Q

\tdx{ex1I}      [| P(a); !!x. P(x) ==> x=a |] ==> ?! x. P(x)
\tdx{ex1E}      [| ?! x.P(x); !!x. [| P(x); ! y. P(y) --> y=x |] ==> R
                |] ==> R

\tdx{select_equality}  [| P(a); !!x. P(x) ==> x=a |] ==> (@x.P(x)) = a
\subcaption{Quantifiers and descriptions}

\tdx{ccontr}           (~P ==> False) ==> P
\tdx{classical}        (~P ==> P) ==> P
\tdx{excluded_middle}  ~P | P

\tdx{disjCI}   (~Q ==> P) ==> P|Q
\tdx{exCI}     (! x. ~ P(x) ==> P(a)) ==> ? x.P(x)
\tdx{impCE}    [| P-->Q; ~ P ==> R; Q ==> R |] ==> R
\tdx{iffCE}    [| P=Q; [| P;Q |] ==> R; [| ~P; ~Q |] ==> R |] ==> R
\tdx{notnotD}  ~~P ==> P
\tdx{swap}     ~P ==> (~Q ==> P) ==> Q
\subcaption{Classical logic}

\tdx{if_True}    if(True,x,y) = x
\tdx{if_False}   if(False,x,y) = y
\tdx{if_P}       P ==> if(P,x,y) = x
\tdx{if_not_P}   ~ P ==> if(P,x,y) = y
\tdx{expand_if}  P(if(Q,x,y)) = ((Q --> P(x)) & (~Q --> P(y)))
\subcaption{Conditionals}
\end{ttbox}
\caption{More derived rules} \label{hol-lemmas2}
\end{figure}


Some derived rules are shown in Figures~\ref{hol-lemmas1}
and~\ref{hol-lemmas2}, with their {\ML} names.  These include natural rules
for the logical connectives, as well as sequent-style elimination rules for
conjunctions, implications, and universal quantifiers.

Note the equality rules: \tdx{ssubst} performs substitution in
backward proofs, while \tdx{box_equals} supports reasoning by
simplifying both sides of an equation.


\begin{figure}
\begin{center}
\begin{tabular}{rrr}
  \it name      &\it meta-type  & \it description \\
\index{{}@\verb'{}' symbol}
  \verb|{}|     & $\alpha\,set$         & the empty set \\
  \cdx{insert}  & $[\alpha,\alpha\,set]\To \alpha\,set$
        & insertion of element \\
  \cdx{Collect} & $(\alpha\To bool)\To\alpha\,set$
        & comprehension \\
  \cdx{Compl}   & $(\alpha\,set)\To\alpha\,set$
        & complement \\
  \cdx{INTER}   & $[\alpha\,set,\alpha\To\beta\,set]\To\beta\,set$
        & intersection over a set\\
  \cdx{UNION}   & $[\alpha\,set,\alpha\To\beta\,set]\To\beta\,set$
        & union over a set\\
  \cdx{Inter}   & $(\alpha\,set)set\To\alpha\,set$
        &set of sets intersection \\
  \cdx{Union}   & $(\alpha\,set)set\To\alpha\,set$
        &set of sets union \\
  \cdx{Pow}     & $\alpha\,set \To (\alpha\,set)set$
        & powerset \\[1ex]
  \cdx{range}   & $(\alpha\To\beta )\To\beta\,set$
        & range of a function \\[1ex]
  \cdx{Ball}~~\cdx{Bex} & $[\alpha\,set,\alpha\To bool]\To bool$
        & bounded quantifiers \\
  \cdx{mono}    & $(\alpha\,set\To\beta\,set)\To bool$
        & monotonicity \\
  \cdx{inj}~~\cdx{surj}& $(\alpha\To\beta )\To bool$
        & injective/surjective \\
  \cdx{inj_onto}        & $[\alpha\To\beta ,\alpha\,set]\To bool$
        & injective over subset
\end{tabular}
\end{center}
\subcaption{Constants}

\begin{center}
\begin{tabular}{llrrr}
  \it symbol &\it name     &\it meta-type & \it priority & \it description \\
  \sdx{INT}  & \cdx{INTER1}  & $(\alpha\To\beta\,set)\To\beta\,set$ & 10 &
        intersection over a type\\
  \sdx{UN}  & \cdx{UNION1}  & $(\alpha\To\beta\,set)\To\beta\,set$ & 10 &
        union over a type
\end{tabular}
\end{center}
\subcaption{Binders}

\begin{center}
\index{*"`"` symbol}
\index{*": symbol}
\index{*"<"= symbol}
\begin{tabular}{rrrr}
  \it symbol    & \it meta-type & \it priority & \it description \\
  \tt ``        & $[\alpha\To\beta ,\alpha\,set]\To (\beta\,set)$
        & Left 90 & image \\
  \sdx{Int}     & $[\alpha\,set,\alpha\,set]\To\alpha\,set$
        & Left 70 & intersection ($\inter$) \\
  \sdx{Un}      & $[\alpha\,set,\alpha\,set]\To\alpha\,set$
        & Left 65 & union ($\union$) \\
  \tt:          & $[\alpha ,\alpha\,set]\To bool$
        & Left 50 & membership ($\in$) \\
  \tt <=        & $[\alpha\,set,\alpha\,set]\To bool$
        & Left 50 & subset ($\subseteq$)
\end{tabular}
\end{center}
\subcaption{Infixes}
\caption{Syntax of the theory {\tt Set}} \label{hol-set-syntax}
\end{figure}


\begin{figure}
\begin{center} \tt\frenchspacing
\index{*"! symbol}
\begin{tabular}{rrr}
  \it external          & \it internal  & \it description \\
  $a$ \ttilde: $b$      & \ttilde($a$ : $b$)    & \rm non-membership\\
  \{$a@1$, $\ldots$\}  &  insert($a@1$, $\ldots$\{\}) & \rm finite set \\
  \{$x$.$P[x]$\}        &  Collect($\lambda x.P[x]$) &
        \rm comprehension \\
  \sdx{INT} $x$:$A$.$B[x]$      & INTER($A$,$\lambda x.B[x]$) &
        \rm intersection \\
  \sdx{UN}{\tt\ }  $x$:$A$.$B[x]$      & UNION($A$,$\lambda x.B[x]$) &
        \rm union \\
  \tt ! $x$:$A$.$P[x]$ or \sdx{ALL} $x$:$A$.$P[x]$ &
        Ball($A$,$\lambda x.P[x]$) &
        \rm bounded $\forall$ \\
  \sdx{?} $x$:$A$.$P[x]$ or \sdx{EX}{\tt\ } $x$:$A$.$P[x]$ &
        Bex($A$,$\lambda x.P[x]$) & \rm bounded $\exists$
\end{tabular}
\end{center}
\subcaption{Translations}

\dquotes
\[\begin{array}{rclcl}
    term & = & \hbox{other terms\ldots} \\
         & | & "\{\}" \\
         & | & "\{ " term\; ("," term)^* " \}" \\
         & | & "\{ " id " . " formula " \}" \\
         & | & term " `` " term \\
         & | & term " Int " term \\
         & | & term " Un " term \\
         & | & "INT~~"  id ":" term " . " term \\
         & | & "UN~~~"  id ":" term " . " term \\
         & | & "INT~~"  id~id^* " . " term \\
         & | & "UN~~~"  id~id^* " . " term \\[2ex]
 formula & = & \hbox{other formulae\ldots} \\
         & | & term " : " term \\
         & | & term " \ttilde: " term \\
         & | & term " <= " term \\
         & | & "!~" id ":" term " . " formula
         & | & "ALL " id ":" term " . " formula \\
         & | & "?~" id ":" term " . " formula
         & | & "EX~~" id ":" term " . " formula
  \end{array}
\]
\subcaption{Full Grammar}
\caption{Syntax of the theory {\tt Set} (continued)} \label{hol-set-syntax2}
\end{figure}


\section{A formulation of set theory}
Historically, higher-order logic gives a foundation for Russell and
Whitehead's theory of classes.  Let us use modern terminology and call them
{\bf sets}, but note that these sets are distinct from those of ZF set theory,
and behave more like ZF classes.
\begin{itemize}
\item
Sets are given by predicates over some type~$\sigma$.  Types serve to
define universes for sets, but type checking is still significant.
\item
There is a universal set (for each type).  Thus, sets have complements, and
may be defined by absolute comprehension.
\item
Although sets may contain other sets as elements, the containing set must
have a more complex type.
\end{itemize}
Finite unions and intersections have the same behaviour in HOL as they do
in~ZF.  In HOL the intersection of the empty set is well-defined, denoting the
universal set for the given type.


\subsection{Syntax of set theory}\index{*set type}
HOL's set theory is called \thydx{Set}.  The type $\alpha\,set$ is essentially
the same as $\alpha\To bool$.  The new type is defined for clarity and to
avoid complications involving function types in unification.  Since Isabelle
does not support type definitions (as mentioned in \S\ref{HOL-types}), the
isomorphisms between the two types are declared explicitly.  Here they are
natural: {\tt Collect} maps $\alpha\To bool$ to $\alpha\,set$, while \hbox{\tt
op :} maps in the other direction (ignoring argument order).

Figure~\ref{hol-set-syntax} lists the constants, infixes, and syntax
translations.  Figure~\ref{hol-set-syntax2} presents the grammar of the new
constructs.  Infix operators include union and intersection ($A\union B$
and $A\inter B$), the subset and membership relations, and the image
operator~{\tt``}\@.  Note that $a$\verb|~:|$b$ is translated to
$\neg(a\in b)$.

The {\tt\{\ldots\}} notation abbreviates finite sets constructed in the
obvious manner using~{\tt insert} and~$\{\}$:
\begin{eqnarray*}
  \{a@1, \ldots, a@n\}  & \equiv &
  {\tt insert}(a@1,\ldots,{\tt insert}(a@n,\{\}))
\end{eqnarray*}
lcp@104:
wenzelm@9695: The set \hbox{\tt\{$x$.$P[x]$\}} consists of all $x$ (of suitable type) that
wenzelm@9695: satisfy~$P[x]$, where $P[x]$ is a formula that may contain free occurrences
wenzelm@9695: of~$x$. This syntax expands to \cdx{Collect}$(\lambda x.P[x])$. It defines
wenzelm@9695: sets by absolute comprehension, which is impossible in~ZF; the type of~$x$
wenzelm@9695: implicitly restricts the comprehension.
lcp@104:
lcp@104: The set theory defines two {\bf bounded quantifiers}:
lcp@104: \begin{eqnarray*}
lcp@315: \forall x\in A.P[x] &\hbox{abbreviates}& \forall x. x\in A\imp P[x] \\
lcp@315: \exists x\in A.P[x] &\hbox{abbreviates}& \exists x. x\in A\conj P[x]
lcp@104: \end{eqnarray*}
lcp@315: The constants~\cdx{Ball} and~\cdx{Bex} are defined
lcp@104: accordingly. Instead of {\tt Ball($A$,$P$)} and {\tt Bex($A$,$P$)} we may
lcp@315: write\index{*"! symbol}\index{*"? symbol}
lcp@315: \index{*ALL symbol}\index{*EX symbol}
lcp@315: %
lcp@315: \hbox{\tt !~$x$:$A$.$P[x]$} and \hbox{\tt ?~$x$:$A$.$P[x]$}. Isabelle's
lcp@315: usual quantifier symbols, \sdx{ALL} and \sdx{EX}, are also accepted
lcp@315: for input. As with the primitive quantifiers, the {\ML} reference
lcp@315: \ttindex{HOL_quantifiers} specifies which notation to use for output.

Unions and intersections over sets, namely $\bigcup@{x\in A}B[x]$ and
$\bigcap@{x\in A}B[x]$, are written
\sdx{UN}~\hbox{\tt$x$:$A$.$B[x]$} and
\sdx{INT}~\hbox{\tt$x$:$A$.$B[x]$}.

Unions and intersections over types, namely $\bigcup@x B[x]$ and $\bigcap@x
B[x]$, are written \sdx{UN}~\hbox{\tt$x$.$B[x]$} and
\sdx{INT}~\hbox{\tt$x$.$B[x]$}.  They are equivalent to the previous
union and intersection operators when $A$ is the universal set.

The operators $\bigcup A$ and $\bigcap A$ act upon sets of sets.  They are
not binders, but are equal to $\bigcup@{x\in A}x$ and $\bigcap@{x\in A}x$,
respectively.

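Over a finite set of sets these operators agree with the binary ones; for
instance,
\[ \bigcup\{A,B\} = A\union B \qquad \bigcap\{A,B\} = A\inter B . \]
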

\begin{figure} \underscoreon
\begin{ttbox}
\tdx{mem_Collect_eq}  (a : \{x.P(x)\}) = P(a)
\tdx{Collect_mem_eq}  \{x.x:A\} = A

\tdx{empty_def}       \{\} == \{x.False\}
\tdx{insert_def}      insert(a,B) == \{x.x=a\} Un B
\tdx{Ball_def}        Ball(A,P) == ! x. x:A --> P(x)
\tdx{Bex_def}         Bex(A,P) == ? x. x:A & P(x)
\tdx{subset_def}      A <= B == ! x:A. x:B
\tdx{Un_def}          A Un B == \{x.x:A | x:B\}
\tdx{Int_def}         A Int B == \{x.x:A & x:B\}
\tdx{set_diff_def}    A - B == \{x.x:A & x~:B\}
\tdx{Compl_def}       Compl(A) == \{x. ~ x:A\}
\tdx{INTER_def}       INTER(A,B) == \{y. ! x:A. y: B(x)\}
\tdx{UNION_def}       UNION(A,B) == \{y. ? x:A. y: B(x)\}
\tdx{INTER1_def}      INTER1(B) == INTER(\{x.True\}, B)
\tdx{UNION1_def}      UNION1(B) == UNION(\{x.True\}, B)
\tdx{Inter_def}       Inter(S) == (INT x:S. x)
\tdx{Union_def}       Union(S) == (UN x:S. x)
\tdx{Pow_def}         Pow(A) == \{B. B <= A\}
\tdx{image_def}       f``A == \{y. ? x:A. y=f(x)\}
\tdx{range_def}       range(f) == \{y. ? x. y=f(x)\}
\tdx{mono_def}        mono(f) == !A B. A <= B --> f(A) <= f(B)
\tdx{inj_def}         inj(f) == ! x y. f(x)=f(y) --> x=y
\tdx{surj_def}        surj(f) == ! y. ? x. y=f(x)
\tdx{inj_onto_def}    inj_onto(f,A) == !x:A. !y:A. f(x)=f(y) --> x=y
\end{ttbox}
\caption{Rules of the theory {\tt Set}} \label{hol-set-rules}
\end{figure}


\begin{figure} \underscoreon
\begin{ttbox}
\tdx{CollectI}      [| P(a) |] ==> a : \{x.P(x)\}
\tdx{CollectD}      [| a : \{x.P(x)\} |] ==> P(a)
\tdx{CollectE}      [| a : \{x.P(x)\}; P(a) ==> W |] ==> W

\tdx{ballI}         [| !!x. x:A ==> P(x) |] ==> ! x:A. P(x)
\tdx{bspec}         [| ! x:A. P(x); x:A |] ==> P(x)
\tdx{ballE}         [| ! x:A. P(x); P(x) ==> Q; ~ x:A ==> Q |] ==> Q

\tdx{bexI}          [| P(x); x:A |] ==> ? x:A. P(x)
\tdx{bexCI}         [| ! x:A. ~ P(x) ==> P(a); a:A |] ==> ? x:A.P(x)
\tdx{bexE}          [| ? x:A. P(x); !!x. [| x:A; P(x) |] ==> Q |] ==> Q
\subcaption{Comprehension and Bounded quantifiers}

\tdx{subsetI}       (!!x.x:A ==> x:B) ==> A <= B
\tdx{subsetD}       [| A <= B; c:A |] ==> c:B
\tdx{subsetCE}      [| A <= B; ~ (c:A) ==> P; c:B ==> P |] ==> P

\tdx{subset_refl}   A <= A
\tdx{subset_trans}  [| A<=B; B<=C |] ==> A<=C

\tdx{equalityI}     [| A <= B; B <= A |] ==> A = B
\tdx{equalityD1}    A = B ==> A<=B
\tdx{equalityD2}    A = B ==> B<=A
\tdx{equalityE}     [| A = B; [| A<=B; B<=A |] ==> P |] ==> P

\tdx{equalityCE}    [| A = B; [| c:A; c:B |] ==> P;
                             [| ~ c:A; ~ c:B |] ==> P
                    |] ==> P
\subcaption{The subset and equality relations}
\end{ttbox}
\caption{Derived rules for set theory} \label{hol-set1}
\end{figure}


\begin{figure} \underscoreon
\begin{ttbox}
\tdx{emptyE}   a : \{\} ==> P

\tdx{insertI1} a : insert(a,B)
\tdx{insertI2} a : B ==> a : insert(b,B)
\tdx{insertE}  [| a : insert(b,A); a=b ==> P; a:A ==> P |] ==> P

\tdx{ComplI}   [| c:A ==> False |] ==> c : Compl(A)
\tdx{ComplD}   [| c : Compl(A) |] ==> ~ c:A

\tdx{UnI1}     c:A ==> c : A Un B
\tdx{UnI2}     c:B ==> c : A Un B
\tdx{UnCI}     (~c:B ==> c:A) ==> c : A Un B
\tdx{UnE}      [| c : A Un B; c:A ==> P; c:B ==> P |] ==> P

\tdx{IntI}     [| c:A; c:B |] ==> c : A Int B
\tdx{IntD1}    c : A Int B ==> c:A
\tdx{IntD2}    c : A Int B ==> c:B
\tdx{IntE}     [| c : A Int B; [| c:A; c:B |] ==> P |] ==> P

\tdx{UN_I}     [| a:A; b: B(a) |] ==> b: (UN x:A. B(x))
\tdx{UN_E}     [| b: (UN x:A. B(x)); !!x.[| x:A; b:B(x) |] ==> R |] ==> R

\tdx{INT_I}    (!!x. x:A ==> b: B(x)) ==> b : (INT x:A. B(x))
\tdx{INT_D}    [| b: (INT x:A. B(x)); a:A |] ==> b: B(a)
\tdx{INT_E}    [| b: (INT x:A. B(x)); b: B(a) ==> R; ~ a:A ==> R |] ==> R

\tdx{UnionI}   [| X:C; A:X |] ==> A : Union(C)
\tdx{UnionE}   [| A : Union(C); !!X.[| A:X; X:C |] ==> R |] ==> R

\tdx{InterI}   [| !!X. X:C ==> A:X |] ==> A : Inter(C)
\tdx{InterD}   [| A : Inter(C); X:C |] ==> A:X
\tdx{InterE}   [| A : Inter(C); A:X ==> R; ~ X:C ==> R |] ==> R

\tdx{PowI}     A<=B ==> A: Pow(B)
\tdx{PowD}     A: Pow(B) ==> A<=B
\end{ttbox}
\caption{Further derived rules for set theory} \label{hol-set2}
\end{figure}
lcp@315:
lcp@315:
lcp@315: \subsection{Axioms and rules of set theory}
lcp@315: Figure~\ref{hol-set-rules} presents the rules of theory \thydx{Set}. The
lcp@315: axioms \tdx{mem_Collect_eq} and \tdx{Collect_mem_eq} assert
lcp@315: that the functions {\tt Collect} and \hbox{\tt op :} are isomorphisms. Of
lcp@315: course, \hbox{\tt op :} also serves as the membership relation.
lcp@315:
lcp@315: All the other axioms are definitions. They include the empty set, bounded
lcp@315: quantifiers, unions, intersections, complements and the subset relation.
lcp@315: They also include straightforward properties of functions: image~({\tt``}) and
lcp@315: {\tt range}, and predicates concerning monotonicity, injectiveness and
lcp@315: surjectiveness.
lcp@315:
lcp@315: The predicate \cdx{inj_onto} is used for simulating type definitions.
lcp@315: The statement ${\tt inj_onto}(f,A)$ asserts that $f$ is injective on the
lcp@315: set~$A$, which specifies a subset of its domain type. In a type
lcp@315: definition, $f$ is the abstraction function and $A$ is the set of valid
lcp@315: representations; we should not expect $f$ to be injective outside of~$A$.
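As an informal illustration, injectivity on a finite set can be checked by
exhaustive comparison.  The following Python sketch (the function name and the
encoding of sets as Python sets are ours, not part of theory {\tt Set}) mirrors
the definition:

```python
# Brute-force model of "f is injective on A": within A, equal images
# force equal arguments.  Illustration only -- not HOL syntax.
def inj_onto(f, A):
    return all(f(x) != f(y) or x == y for x in A for y in A)

# abs is injective on the naturals, but not once -1 and 1 both occur:
print(inj_onto(abs, {0, 1, 2, 3}))   # True
print(inj_onto(abs, {-1, 0, 1}))     # False
```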
lcp@315:
lcp@315: \begin{figure} \underscoreon
lcp@315: \begin{ttbox}
lcp@315: \tdx{Inv_f_f} inj(f) ==> Inv(f,f(x)) = x
lcp@315: \tdx{f_Inv_f} y : range(f) ==> f(Inv(f,y)) = y
lcp@315:
lcp@315: %\tdx{Inv_injective}
lcp@315: % [| Inv(f,x)=Inv(f,y); x: range(f); y: range(f) |] ==> x=y
lcp@315: %
lcp@315: \tdx{imageI} [| x:A |] ==> f(x) : f``A
lcp@315: \tdx{imageE} [| b : f``A; !!x.[| b=f(x); x:A |] ==> P |] ==> P
lcp@315:
lcp@315: \tdx{rangeI} f(x) : range(f)
lcp@315: \tdx{rangeE} [| b : range(f); !!x.[| b=f(x) |] ==> P |] ==> P
lcp@315:
lcp@315: \tdx{monoI} [| !!A B. A <= B ==> f(A) <= f(B) |] ==> mono(f)
lcp@315: \tdx{monoD} [| mono(f); A <= B |] ==> f(A) <= f(B)
lcp@315:
lcp@315: \tdx{injI} [| !! x y. f(x) = f(y) ==> x=y |] ==> inj(f)
lcp@315: \tdx{inj_inverseI} (!!x. g(f(x)) = x) ==> inj(f)
lcp@315: \tdx{injD} [| inj(f); f(x) = f(y) |] ==> x=y
lcp@315:
lcp@315: \tdx{inj_ontoI} (!!x y. [| f(x)=f(y); x:A; y:A |] ==> x=y) ==> inj_onto(f,A)
lcp@315: \tdx{inj_ontoD} [| inj_onto(f,A); f(x)=f(y); x:A; y:A |] ==> x=y
lcp@315:
lcp@315: \tdx{inj_onto_inverseI}
lcp@104: (!!x. x:A ==> g(f(x)) = x) ==> inj_onto(f,A)
lcp@315: \tdx{inj_onto_contraD}
lcp@104: [| inj_onto(f,A); x~=y; x:A; y:A |] ==> ~ f(x)=f(y)
lcp@104: \end{ttbox}
lcp@104: \caption{Derived rules involving functions} \label{hol-fun}
lcp@104: \end{figure}
lcp@104:
lcp@104:
lcp@287: \begin{figure} \underscoreon
lcp@104: \begin{ttbox}
lcp@315: \tdx{Union_upper} B:A ==> B <= Union(A)
lcp@315: \tdx{Union_least} [| !!X. X:A ==> X<=C |] ==> Union(A) <= C
lcp@104:
lcp@315: \tdx{Inter_lower} B:A ==> Inter(A) <= B
lcp@315: \tdx{Inter_greatest} [| !!X. X:A ==> C<=X |] ==> C <= Inter(A)
lcp@104:
lcp@315: \tdx{Un_upper1} A <= A Un B
lcp@315: \tdx{Un_upper2} B <= A Un B
lcp@315: \tdx{Un_least} [| A<=C; B<=C |] ==> A Un B <= C
lcp@104:
lcp@315: \tdx{Int_lower1} A Int B <= A
lcp@315: \tdx{Int_lower2} A Int B <= B
lcp@315: \tdx{Int_greatest} [| C<=A; C<=B |] ==> C <= A Int B
lcp@104: \end{ttbox}
lcp@104: \caption{Derived rules involving subsets} \label{hol-subset}
lcp@104: \end{figure}
lcp@104:
lcp@104:
lcp@315: \begin{figure} \underscoreon \hfuzz=4pt%suppress "Overfull \hbox" message
lcp@104: \begin{ttbox}
lcp@315: \tdx{Int_absorb} A Int A = A
lcp@315: \tdx{Int_commute} A Int B = B Int A
lcp@315: \tdx{Int_assoc} (A Int B) Int C = A Int (B Int C)
lcp@315: \tdx{Int_Un_distrib} (A Un B) Int C = (A Int C) Un (B Int C)
lcp@104:
lcp@315: \tdx{Un_absorb} A Un A = A
lcp@315: \tdx{Un_commute} A Un B = B Un A
lcp@315: \tdx{Un_assoc} (A Un B) Un C = A Un (B Un C)
lcp@315: \tdx{Un_Int_distrib} (A Int B) Un C = (A Un C) Int (B Un C)
lcp@104:
lcp@315: \tdx{Compl_disjoint} A Int Compl(A) = \{x.False\}
lcp@315: \tdx{Compl_partition} A Un Compl(A) = \{x.True\}
lcp@315: \tdx{double_complement} Compl(Compl(A)) = A
lcp@315: \tdx{Compl_Un} Compl(A Un B) = Compl(A) Int Compl(B)
lcp@315: \tdx{Compl_Int} Compl(A Int B) = Compl(A) Un Compl(B)
lcp@104:
lcp@315: \tdx{Union_Un_distrib} Union(A Un B) = Union(A) Un Union(B)
lcp@315: \tdx{Int_Union} A Int Union(B) = (UN C:B. A Int C)
lcp@315: \tdx{Un_Union_image} (UN x:C. A(x) Un B(x)) = Union(A``C) Un Union(B``C)
lcp@104:
lcp@315: \tdx{Inter_Un_distrib} Inter(A Un B) = Inter(A) Int Inter(B)
lcp@315: \tdx{Un_Inter} A Un Inter(B) = (INT C:B. A Un C)
lcp@315: \tdx{Int_Inter_image} (INT x:C. A(x) Int B(x)) = Inter(A``C) Int Inter(B``C)
lcp@104: \end{ttbox}
lcp@104: \caption{Set equalities} \label{hol-equalities}
lcp@104: \end{figure}
lcp@104:
lcp@104:
lcp@315: Figures~\ref{hol-set1} and~\ref{hol-set2} present derived rules. Most are
wenzelm@9695: obvious and resemble rules of Isabelle's ZF set theory. Certain rules, such
wenzelm@9695: as \tdx{subsetCE}, \tdx{bexCI} and \tdx{UnCI}, are designed for classical
wenzelm@9695: reasoning; the rules \tdx{subsetD}, \tdx{bexI}, \tdx{UnI1} and~\tdx{UnI2} are
wenzelm@9695: not strictly necessary but yield more natural proofs. Similarly,
wenzelm@9695: \tdx{equalityCE} supports classical reasoning about extensionality, after the
wenzelm@9695: fashion of \tdx{iffCE}. See the file {\tt HOL/Set.ML} for proofs pertaining
wenzelm@9695: to set theory.
lcp@104:
lcp@315: Figure~\ref{hol-fun} presents derived inference rules involving functions.
lcp@315: These include rules for \cdx{Inv}, which is defined in theory~{\tt
lcp@315:   HOL}; note that ${\tt Inv}(f)$ applies the Axiom of Choice to yield an
lcp@315: inverse of~$f$.  There are also natural deduction rules for the image
lcp@315: and range operators, and for the predicates {\tt inj} and {\tt inj_onto}.
lcp@315: Reasoning about function composition (the operator~\sdx{o}) and the
lcp@315: predicate~\cdx{surj} is done simply by expanding the definitions. See
lcp@315: the file {\tt HOL/fun.ML} for a complete listing of the derived rules.
lcp@104:
lcp@104: Figure~\ref{hol-subset} presents lattice properties of the subset relation.
lcp@315: Unions form least upper bounds; non-empty intersections form greatest lower
lcp@315: bounds. Reasoning directly about subsets often yields clearer proofs than
lcp@315: reasoning about the membership relation. See the file {\tt HOL/subset.ML}.
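These lattice properties can be spot-checked on small finite sets; here is a
Python sketch (using Python's built-in sets and `<=` for the subset relation,
with names of our own choosing):

```python
# Un is the least upper bound and Int the greatest lower bound of the
# subset ordering, illustrated on finite sets.
A, B = {1, 2}, {2, 3}

# Un_upper1 / Un_upper2: A Un B is an upper bound of A and of B
assert A <= (A | B) and B <= (A | B)

# Un_least: any C above both A and B is above A Un B
C = {0, 1, 2, 3}
assert A <= C and B <= C and (A | B) <= C

# Int_lower1 / Int_lower2 / Int_greatest: dually for intersection
assert (A & B) <= A and (A & B) <= B
D = {2}
assert D <= A and D <= B and D <= (A & B)
```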
lcp@104:
lcp@315: Figure~\ref{hol-equalities} presents many common set equalities. They
lcp@315: include commutative, associative and distributive laws involving unions,
lcp@315: intersections and complements. The proofs are mostly trivial, using the
lcp@315: classical reasoner; see file {\tt HOL/equalities.ML}.
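The equalities are likewise easy to test on a finite model; in the following
Python sketch the complement is taken relative to an explicit universe
{\tt U}, our stand-in for HOL's complement over the whole type:

```python
# Spot-check some of the set equalities on a finite universe.
U = set(range(8))
compl = lambda S: U - S
A, B, C = {1, 2, 3}, {3, 4}, {2, 4, 6}

assert compl(A | B) == compl(A) & compl(B)        # Compl_Un
assert compl(A & B) == compl(A) | compl(B)        # Compl_Int
assert compl(compl(A)) == A                       # double_complement
assert (A | B) & C == (A & C) | (B & C)           # Int_Un_distrib
assert (A & B) | C == (A | C) & (B | C)           # Un_Int_distrib
```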
lcp@104:
lcp@104:
lcp@287: \begin{figure}
lcp@315: \begin{constants}
lcp@344: \it symbol & \it meta-type & & \it description \\
lcp@315: \cdx{Pair} & $[\alpha,\beta]\To \alpha\times\beta$
lcp@315: & & ordered pairs $\langle a,b\rangle$ \\
lcp@315: \cdx{fst} & $\alpha\times\beta \To \alpha$ & & first projection\\
lcp@315: \cdx{snd} & $\alpha\times\beta \To \beta$ & & second projection\\
lcp@705: \cdx{split} & $[[\alpha,\beta]\To\gamma, \alpha\times\beta] \To \gamma$
lcp@315: & & generalized projection\\
lcp@315: \cdx{Sigma} &
lcp@287: $[\alpha\,set, \alpha\To\beta\,set]\To(\alpha\times\beta)set$ &
lcp@315: & general sum of sets
lcp@315: \end{constants}
lcp@315: \begin{ttbox}\makeatletter
lcp@315: \tdx{fst_def}      fst(p)     == @a. ? b. p = <a,b>
lcp@315: \tdx{snd_def}      snd(p)     == @b. ? a. p = <a,b>
lcp@705: \tdx{split_def}    split(c,p) == c(fst(p),snd(p))
lcp@315: \tdx{Sigma_def}    Sigma(A,B) == UN x:A. UN y:B(x). \{<x,y>\}
lcp@104:
lcp@104:
lcp@315: \tdx{Pair_inject}  [| <a,b> = <a',b'>;  [| a=a'; b=b' |] ==> R |] ==> R
lcp@349: \tdx{fst_conv}     fst(<a,b>) = a
lcp@349: \tdx{snd_conv}     snd(<a,b>) = b
lcp@705: \tdx{split}        split(c, <a,b>) = c(a,b)
lcp@104: 
lcp@315: \tdx{surjective_pairing}  p = <fst(p),snd(p)>
lcp@104: 
lcp@315: \tdx{SigmaI}       [| a:A;  b:B(a) |] ==> <a,b> : Sigma(A,B)
lcp@287: 
lcp@315: \tdx{SigmaE}       [| c: Sigma(A,B);
lcp@287:                 !!x y.[| x:A; y:B(x); c=<x,y> |] ==> P |] ==> P
lcp@104: \end{ttbox}
lcp@315: \caption{Type $\alpha\times\beta$}\label{hol-prod}
lcp@104: \end{figure}
lcp@104:
lcp@104:
lcp@287: \begin{figure}
lcp@315: \begin{constants}
lcp@344: \it symbol & \it meta-type & & \it description \\
lcp@315: \cdx{Inl} & $\alpha \To \alpha+\beta$ & & first injection\\
lcp@315: \cdx{Inr} & $\beta \To \alpha+\beta$ & & second injection\\
lcp@705: \cdx{sum_case} & $[\alpha\To\gamma, \beta\To\gamma, \alpha+\beta] \To\gamma$
lcp@315: & & conditional
lcp@315: \end{constants}
lcp@315: \begin{ttbox}\makeatletter
lcp@705: \tdx{sum_case_def} sum_case == (\%f g p. @z. (!x. p=Inl(x) --> z=f(x)) &
lcp@315: (!y. p=Inr(y) --> z=g(y)))
lcp@104:
lcp@315: \tdx{Inl_not_Inr} ~ Inl(a)=Inr(b)
lcp@104:
lcp@315: \tdx{inj_Inl} inj(Inl)
lcp@315: \tdx{inj_Inr} inj(Inr)
lcp@104:
lcp@315: \tdx{sumE} [| !!x::'a. P(Inl(x)); !!y::'b. P(Inr(y)) |] ==> P(s)
lcp@104:
lcp@705: \tdx{sum_case_Inl} sum_case(f, g, Inl(x)) = f(x)
lcp@705: \tdx{sum_case_Inr} sum_case(f, g, Inr(x)) = g(x)
lcp@104:
lcp@705: \tdx{surjective_sum} sum_case(\%x::'a. f(Inl(x)), \%y::'b. f(Inr(y)), s) = f(s)
lcp@104: \end{ttbox}
lcp@315: \caption{Type $\alpha+\beta$}\label{hol-sum}
lcp@104: \end{figure}
lcp@104:
lcp@104:
lcp@344: \section{Generic packages and classical reasoning}
wenzelm@9695: HOL instantiates most of Isabelle's generic packages; see {\tt HOL/ROOT.ML}
wenzelm@9695: for details.
lcp@344: \begin{itemize}
wenzelm@9695: \item Because it includes a general substitution rule, HOL instantiates the
wenzelm@9695: tactic {\tt hyp_subst_tac}, which substitutes for an equality throughout a
wenzelm@9695: subgoal and its hypotheses.
lcp@344: \item
lcp@344: It instantiates the simplifier, defining~\ttindexbold{HOL_ss} as the
lcp@344: simplification set for higher-order logic. Equality~($=$), which also
lcp@344: expresses logical equivalence, may be used for rewriting. See the file
lcp@344: {\tt HOL/simpdata.ML} for a complete listing of the simplification
lcp@344: rules.
lcp@344: \item
lcp@344: It instantiates the classical reasoner, as described below.
lcp@344: \end{itemize}
wenzelm@9695: HOL derives classical introduction rules for $\disj$ and~$\exists$, as well as
wenzelm@9695: classical elimination rules for~$\imp$ and~$\bimp$, and the swap rule; recall
wenzelm@9695: Fig.\ts\ref{hol-lemmas2} above.
lcp@344:
wenzelm@9695: The classical reasoner is set up as the structure {\tt Classical}. This
wenzelm@9695: structure is open, so {\ML} identifiers such as {\tt step_tac}, {\tt
wenzelm@9695: fast_tac}, {\tt best_tac}, etc., refer to it. HOL defines the following
wenzelm@9695: classical rule sets:
lcp@344: \begin{ttbox}
lcp@344: prop_cs : claset
lcp@344: HOL_cs : claset
lcp@344: set_cs : claset
lcp@344: \end{ttbox}
lcp@344: \begin{ttdescription}
lcp@344: \item[\ttindexbold{prop_cs}] contains the propositional rules, namely
lcp@344: those for~$\top$, $\bot$, $\conj$, $\disj$, $\neg$, $\imp$ and~$\bimp$,
lcp@344: along with the rule~{\tt refl}.
lcp@344:
lcp@344: \item[\ttindexbold{HOL_cs}] extends {\tt prop_cs} with the safe rules
lcp@344: {\tt allI} and~{\tt exE} and the unsafe rules {\tt allE}
lcp@344: and~{\tt exI}, as well as rules for unique existence. Search using
lcp@344: this classical set is incomplete: quantified formulae are used at most
lcp@344: once.
lcp@344:
lcp@344: \item[\ttindexbold{set_cs}] extends {\tt HOL_cs} with rules for the bounded
lcp@344: quantifiers, subsets, comprehensions, unions and intersections,
lcp@344: complements, finite sets, images and ranges.
lcp@344: \end{ttdescription}
lcp@344: \noindent
lcp@344: See \iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
lcp@344: {Chap.\ts\ref{chap:classical}}
lcp@344: for more discussion of classical proof methods.
lcp@344:
lcp@344:
lcp@104: \section{Types}
lcp@104: The basic higher-order logic is augmented with a tremendous amount of
lcp@315: material, including support for recursive function and type definitions. A
lcp@315: detailed discussion appears elsewhere~\cite{paulson-coind}. The simpler
lcp@315: definitions are the same as those used in the {\sc hol} system, but my
lcp@315: treatment of recursive types differs from Melham's~\cite{melham89}. The
lcp@315: present section describes product, sum, natural number and list types.
lcp@104:
lcp@315: \subsection{Product and sum types}\index{*"* type}\index{*"+ type}
lcp@315: Theory \thydx{Prod} defines the product type $\alpha\times\beta$, with
lcp@315: the ordered pair syntax {\tt<$a$,$b$>}. Theory \thydx{Sum} defines the
lcp@315: sum type $\alpha+\beta$. These use fairly standard constructions; see
lcp@315: Figs.\ts\ref{hol-prod} and~\ref{hol-sum}. Because Isabelle does not
lcp@315: support abstract type definitions, the isomorphisms between these types and
lcp@315: their representations are made explicit.
lcp@104:
lcp@104: Most of the definitions are suppressed, but observe that the projections
lcp@104: and conditionals are defined as descriptions. Their properties are easily
lcp@344: proved using \tdx{select_equality}.
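The computation rules have a direct functional reading.  The following Python
sketch (tagged tuples and all names are our own encoding, not HOL syntax)
models the projections and the sum conditional:

```python
# Pairs: fst, snd and split as in theory Prod, modelled on Python tuples.
fst = lambda p: p[0]
snd = lambda p: p[1]
split = lambda c, p: c(fst(p), snd(p))          # split_def

assert split(lambda a, b: a + b, (3, 4)) == 7   # split(c, <a,b>) = c(a,b)
p = ("a", 1)
assert p == (fst(p), snd(p))                    # surjective_pairing

# Sums: Inl/Inr as tagged values, sum_case as the conditional.
Inl = lambda x: ("Inl", x)
Inr = lambda y: ("Inr", y)
def sum_case(f, g, s):
    tag, v = s
    return f(v) if tag == "Inl" else g(v)

assert sum_case(len, abs, Inl("ab")) == 2       # sum_case_Inl
assert sum_case(len, abs, Inr(-3)) == 3         # sum_case_Inr
assert Inl(0) != Inr(0)                         # Inl_not_Inr
```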
lcp@104:
lcp@287: \begin{figure}
lcp@315: \index{*"< symbol}
lcp@315: \index{*"* symbol}
lcp@344: \index{*div symbol}
lcp@344: \index{*mod symbol}
lcp@315: \index{*"+ symbol}
lcp@315: \index{*"- symbol}
lcp@315: \begin{constants}
lcp@315: \it symbol & \it meta-type & \it priority & \it description \\
lcp@315: \cdx{0} & $nat$ & & zero \\
lcp@315: \cdx{Suc} & $nat \To nat$ & & successor function\\
lcp@705: \cdx{nat_case} & $[\alpha, nat\To\alpha, nat] \To\alpha$
lcp@315: & & conditional\\
lcp@315: \cdx{nat_rec} & $[nat, \alpha, [nat, \alpha]\To\alpha] \To \alpha$
lcp@315: & & primitive recursor\\
lcp@315: \cdx{pred_nat} & $(nat\times nat) set$ & & predecessor relation\\
lcp@111: \tt * & $[nat,nat]\To nat$ & Left 70 & multiplication \\
lcp@344: \tt div & $[nat,nat]\To nat$ & Left 70 & division\\
lcp@344: \tt mod & $[nat,nat]\To nat$ & Left 70 & modulus\\
lcp@111: \tt + & $[nat,nat]\To nat$ & Left 65 & addition\\
lcp@111: \tt - & $[nat,nat]\To nat$ & Left 65 & subtraction
lcp@315: \end{constants}
lcp@104: \subcaption{Constants and infixes}
lcp@104:
lcp@287: \begin{ttbox}\makeatother
lcp@705: \tdx{nat_case_def} nat_case == (\%a f n. @z. (n=0 --> z=a) &
lcp@344: (!x. n=Suc(x) --> z=f(x)))
lcp@315: \tdx{pred_nat_def} pred_nat == \{p. ? n. p = <n, Suc(n)>\}
lcp@315: \tdx{less_def}     m<n      == <m,n> : pred_nat^+
lcp@315: \tdx{nat_rec_def} nat_rec(n,c,d) ==
lcp@705: wfrec(pred_nat, n, nat_case(\%g.c, \%m g. d(m,g(m))))
lcp@104:
lcp@344: \tdx{add_def} m+n == nat_rec(m, n, \%u v.Suc(v))
lcp@344: \tdx{diff_def} m-n == nat_rec(n, m, \%u v. nat_rec(v, 0, \%x y.x))
lcp@344: \tdx{mult_def} m*n == nat_rec(m, 0, \%u v. n + v)
lcp@344: \tdx{mod_def}   m mod n == wfrec(trancl(pred_nat), m, \%j f. if(j<n, j, f(j-n)))
lcp@344: \tdx{div_def}   m div n == wfrec(trancl(pred_nat), m, \%j f. if(j<n, 0, Suc(f(j-n))))
lcp@104: \subcaption{The definitions}
lcp@104: 
lcp@315: \tdx{nat_induct}    [| P(0); !!k. [| P(k) |] ==> P(Suc(k)) |] ==> P(n)
lcp@104:
lcp@315: \tdx{Suc_not_Zero} Suc(m) ~= 0
lcp@315: \tdx{inj_Suc} inj(Suc)
lcp@315: \tdx{n_not_Suc_n} n~=Suc(n)
lcp@104: \subcaption{Basic properties}
lcp@104:
lcp@315: \tdx{pred_natI}    <n, Suc(n)> : pred_nat
lcp@315: \tdx{pred_natE}
lcp@104:     [| p : pred_nat;  !!x n. [| p = <n, Suc(n)> |] ==> R |] ==> R
lcp@104:
lcp@705: \tdx{nat_case_0} nat_case(a, f, 0) = a
lcp@705: \tdx{nat_case_Suc} nat_case(a, f, Suc(k)) = f(k)
lcp@104:
lcp@315: \tdx{wf_pred_nat} wf(pred_nat)
lcp@315: \tdx{nat_rec_0} nat_rec(0,c,h) = c
lcp@315: \tdx{nat_rec_Suc} nat_rec(Suc(n), c, h) = h(n, nat_rec(n,c,h))
lcp@104: \subcaption{Case analysis and primitive recursion}
lcp@104:
lcp@315: \tdx{less_trans}     [| i<j;  j<k |] ==> i<k
lcp@315: \tdx{lessI}          n < Suc(n)
lcp@315: \tdx{zero_less_Suc}  0 < Suc(n)
lcp@104: 
lcp@315: \tdx{less_not_sym}   n<m --> ~ m<n
lcp@315: \tdx{less_not_refl}  ~ n<n
lcp@315: \tdx{not_less0}      ~ n<0
lcp@104: 
lcp@315: \tdx{Suc_less_eq}    (Suc(m) < Suc(n)) = (m<n)
lcp@315: \tdx{less_induct}    [| !!n. [| ! m. m<n --> P(m) |] ==> P(n) |] ==> P(n)
lcp@104: 
lcp@315: \tdx{less_linear}    m<n | m=n | n<m
lcp@315: \subcaption{The less-than relation}
lcp@104: \end{ttbox}
lcp@315: \caption{The theory \thydx{Nat}} \label{hol-nat}
lcp@104: \end{figure}
lcp@104: 
lcp@104: 
lcp@315: \subsection{The type of natural numbers, {\tt nat}}
lcp@315: Figure~\ref{hol-nat} presents the theory \thydx{Nat}.  The type~$nat$ is
lcp@315: developed by well-founded recursion over the predecessor relation
lcp@315: \cdx{pred_nat}: the recursor \cdx{nat_rec} and the arithmetic operations
lcp@315: are defined in terms of {\tt wfrec}.
lcp@104: 
lcp@104: 
nipkow@306: \begin{figure}
nipkow@306: \begin{ttbox}\makeatother
lcp@315: \tdx{list_induct}    [| P([]);  !!x xs. P(xs) ==> P(x#xs) |] ==> P(l)
lcp@104:
lcp@315: \tdx{Cons_not_Nil} (x # xs) ~= []
lcp@315: \tdx{Cons_Cons_eq} ((x # xs) = (y # ys)) = (x=y & xs=ys)
nipkow@306: \subcaption{Induction and freeness}
lcp@104: \end{ttbox}
lcp@315: \caption{The theory \thydx{List}} \label{hol-list}
lcp@104: \end{figure}
lcp@104:
nipkow@306: \begin{figure}
nipkow@306: \begin{ttbox}\makeatother
lcp@471: \tdx{list_rec_Nil} list_rec([],c,h) = c
lcp@471: \tdx{list_rec_Cons} list_rec(a#l, c, h) = h(a, l, list_rec(l,c,h))
lcp@315:
lcp@705: \tdx{list_case_Nil} list_case(c, h, []) = c
lcp@705: \tdx{list_case_Cons} list_case(c, h, x#xs) = h(x, xs)
lcp@315:
lcp@471: \tdx{map_Nil} map(f,[]) = []
lcp@471: \tdx{map_Cons} map(f, x \# xs) = f(x) \# map(f,xs)
lcp@315:
lcp@471: \tdx{null_Nil} null([]) = True
lcp@471: \tdx{null_Cons} null(x#xs) = False
lcp@315:
lcp@471: \tdx{hd_Cons} hd(x#xs) = x
lcp@471: \tdx{tl_Cons} tl(x#xs) = xs
lcp@315:
lcp@471: \tdx{ttl_Nil} ttl([]) = []
lcp@471: \tdx{ttl_Cons} ttl(x#xs) = xs
lcp@315:
lcp@471: \tdx{append_Nil} [] @ ys = ys
lcp@471: \tdx{append_Cons} (x#xs) \at ys = x # xs \at ys
lcp@315:
lcp@471: \tdx{mem_Nil} x mem [] = False
lcp@471: \tdx{mem_Cons} x mem (y#ys) = if(y=x, True, x mem ys)
lcp@315:
lcp@471: \tdx{filter_Nil} filter(P, []) = []
lcp@471: \tdx{filter_Cons} filter(P,x#xs) = if(P(x), x#filter(P,xs), filter(P,xs))
lcp@315:
lcp@471: \tdx{list_all_Nil} list_all(P,[]) = True
lcp@471: \tdx{list_all_Cons} list_all(P, x#xs) = (P(x) & list_all(P, xs))
nipkow@306: \end{ttbox}
nipkow@306: \caption{Rewrite rules for lists} \label{hol-list-simps}
nipkow@306: \end{figure}
lcp@104:
lcp@315:
lcp@315: \subsection{The type constructor for lists, {\tt list}}
lcp@315: \index{*list type}
lcp@315:
wenzelm@9695: HOL's definition of lists is an example of an experimental method for handling
wenzelm@9695: recursive data types. Figure~\ref{hol-list} presents the theory \thydx{List}:
wenzelm@9695: the basic list operations with their types and properties.
lcp@104:
lcp@344: The \sdx{case} construct is defined by the following translation:
lcp@315: {\dquotes
lcp@315: \begin{eqnarray*}
lcp@344: \begin{array}{r@{\;}l@{}l}
lcp@315: "case " e " of" & "[]" & " => " a\\
lcp@315: "|" & x"\#"xs & " => " b
lcp@315: \end{array}
lcp@315: & \equiv &
lcp@705: "list_case"(a, \lambda x\;xs.b, e)
lcp@344: \end{eqnarray*}}%
lcp@315: The theory includes \cdx{list_rec}, a primitive recursion operator
lcp@315: for lists. It is derived from well-founded recursion, a general principle
lcp@315: that can express arbitrary total recursive functions.
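Operationally, {\tt list_rec} is the familiar fold over lists.  A Python
rendering (our names, using Python lists) shows the reduction behaviour and a
sample definition by primitive recursion:

```python
# Primitive recursion on lists:
#   list_rec([],   c, h) = c
#   list_rec(x#xs, c, h) = h(x, xs, list_rec(xs, c, h))
def list_rec(xs, c, h):
    if not xs:
        return c
    x, rest = xs[0], xs[1:]
    return h(x, rest, list_rec(rest, c, h))

# append by primitive recursion, mirroring append_Nil / append_Cons
def append(xs, ys):
    return list_rec(xs, ys, lambda x, rest, r: [x] + r)

assert append([], [3]) == [3]
assert append([1, 2], [3]) == [1, 2, 3]
```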
lcp@104:
lcp@315: The simpset \ttindex{list_ss} contains, along with additional useful lemmas,
lcp@315: the basic rewrite rules that appear in Fig.\ts\ref{hol-list-simps}.
lcp@315:
lcp@315: The tactic {\tt\ttindex{list_ind_tac} "$xs$" $i$} performs induction over the
lcp@315: variable~$xs$ in subgoal~$i$.
lcp@315:
lcp@315:
nipkow@464: \section{Datatype declarations}
nipkow@464: \index{*datatype|(}
nipkow@464:
nipkow@464: \underscoreon
nipkow@464:
nipkow@464: It is often necessary to extend a theory with \ML-like datatypes. This
nipkow@464: extension consists of the new type, declarations of its constructors and
nipkow@464: rules that describe the new type.  The theory definition section {\tt
nipkow@464:   datatype} provides a compact way of doing this.
nipkow@464:
nipkow@464:
nipkow@464: \subsection{Foundations}
nipkow@464:
nipkow@464: A datatype declaration has the following general structure:
nipkow@464: \[ \mbox{\tt datatype}~ (\alpha_1,\dots,\alpha_n)t ~=~
nipkow@580: C_1(\tau_{11},\dots,\tau_{1k_1}) ~\mid~ \dots ~\mid~
nipkow@580: C_m(\tau_{m1},\dots,\tau_{mk_m})
nipkow@464: \]
nipkow@580: where $\alpha_i$ are type variables, $C_i$ are distinct constructor names and
nipkow@464: $\tau_{ij}$ are one of the following:
nipkow@464: \begin{itemize}
nipkow@464: \item type variables $\alpha_1,\dots,\alpha_n$,
nipkow@464: \item types $(\beta_1,\dots,\beta_l)s$ where $s$ is a previously declared
nipkow@464: type or type synonym and $\{\beta_1,\dots,\beta_l\} \subseteq
nipkow@464: \{\alpha_1,\dots,\alpha_n\}$,
nipkow@464: \item the newly defined type $(\alpha_1,\dots,\alpha_n)t$ \footnote{This
nipkow@464: makes it a recursive type. To ensure that the new type is not empty at
nipkow@464: least one constructor must consist of only non-recursive type
nipkow@464: components.}
nipkow@464: \end{itemize}
nipkow@580: If you would like one of the $\tau_{ij}$ to be a complex type expression
nipkow@580: $\tau$ you need to declare a new type synonym $syn = \tau$ first and use
nipkow@580: $syn$ in place of $\tau$. Of course this does not work if $\tau$ mentions the
nipkow@580: recursive type itself, thus ruling out problematic cases like \[ \mbox{\tt
nipkow@580: datatype}~ t ~=~ C(t \To t) \] together with unproblematic ones like \[
nipkow@580: \mbox{\tt datatype}~ t ~=~ C(t~list). \]
nipkow@580:
nipkow@464: The constructors are automatically defined as functions of their respective
nipkow@464: type:
nipkow@580: \[ C_j : [\tau_{j1},\dots,\tau_{jk_j}] \To (\alpha_1,\dots,\alpha_n)t \]
nipkow@464: These functions have certain {\em freeness} properties:
nipkow@464: \begin{description}
nipkow@465: \item[\tt distinct] They are distinct:
nipkow@580: \[ C_i(x_1,\dots,x_{k_i}) \neq C_j(y_1,\dots,y_{k_j}) \qquad
nipkow@465: \mbox{for all}~ i \neq j.
nipkow@465: \]
nipkow@464: \item[\tt inject] They are injective:
nipkow@580: \[ (C_j(x_1,\dots,x_{k_j}) = C_j(y_1,\dots,y_{k_j})) =
nipkow@464: (x_1 = y_1 \land \dots \land x_{k_j} = y_{k_j})
nipkow@464: \]
nipkow@464: \end{description}
nipkow@464: Because the number of inequalities is quadratic in the number of
nipkow@464: constructors, a different method is used if their number exceeds
nipkow@464: a certain value, currently 4. In that case every constructor is mapped to a
nipkow@464: natural number
nipkow@464: \[
nipkow@464: \begin{array}{lcl}
nipkow@580: \mbox{\it t\_ord}(C_1(x_1,\dots,x_{k_1})) & = & 0 \\
nipkow@464: & \vdots & \\
nipkow@580: \mbox{\it t\_ord}(C_m(x_1,\dots,x_{k_m})) & = & m-1
nipkow@464: \end{array}
nipkow@464: \]
nipkow@465: and distinctness of constructors is expressed by:
nipkow@464: \[
nipkow@464: \mbox{\it t\_ord}(x) \neq \mbox{\it t\_ord}(y) \Imp x \neq y.
nipkow@464: \]
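As an illustration, distinctness for an enumeration with many constructors
reduces to distinctness of natural numbers.  A Python sketch (the weekday
names anticipate the example below; the function names are ours):

```python
# Map each constructor of a 7-constructor enumeration to a natural
# number; pairwise inequality of constructors then follows from
# inequality of numbers, avoiding the quadratic set of distinctness laws.
days = ["Mo", "Tu", "We", "Th", "Fr", "Sa", "So"]
days_ord = days.index

assert days_ord("Mo") == 0 and days_ord("So") == 6
# t_ord(x) ~= t_ord(y) ==> x ~= y
assert all(days_ord(x) != days_ord(y)
           for x in days for y in days if x != y)
```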
nipkow@464: In addition a structural induction axiom {\tt induct} is provided:
nipkow@464: \[
nipkow@464: \infer{P(x)}
nipkow@464: {\begin{array}{lcl}
nipkow@464: \Forall x_1\dots x_{k_1}.
nipkow@464: \List{P(x_{r_{11}}); \dots; P(x_{r_{1l_1}})} &
nipkow@580: \Imp & P(C_1(x_1,\dots,x_{k_1})) \\
nipkow@464: & \vdots & \\
nipkow@464: \Forall x_1\dots x_{k_m}.
nipkow@464: \List{P(x_{r_{m1}}); \dots; P(x_{r_{ml_m}})} &
nipkow@580: \Imp & P(C_m(x_1,\dots,x_{k_m}))
nipkow@464: \end{array}}
nipkow@464: \]
nipkow@464: where $\{r_{j1},\dots,r_{jl_j}\} = \{i \in \{1,\dots k_j\} ~\mid~ \tau_{ji}
nipkow@464: = (\alpha_1,\dots,\alpha_n)t \}$, i.e.\ the property $P$ can be assumed for
nipkow@464: all arguments of the recursive type.
nipkow@464:
nipkow@465: The type also comes with an \ML-like \sdx{case}-construct:
nipkow@464: \[
nipkow@464: \begin{array}{rrcl}
nipkow@580: \mbox{\tt case}~e~\mbox{\tt of} & C_1(x_{11},\dots,x_{1k_1}) & \To & e_1 \\
nipkow@464: \vdots \\
nipkow@580: \mid & C_m(x_{m1},\dots,x_{mk_m}) & \To & e_m
nipkow@464: \end{array}
nipkow@464: \]
nipkow@464: In contrast to \ML, {\em all} constructors must be present, their order is
nipkow@464: fixed, and nested patterns are not supported.
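Freeness and the case construct correspond to familiar properties of
\ML-style datatypes.  A rough Python model using tagged tuples (a hypothetical
tree datatype of our own, not one declared in this chapter):

```python
# Model of: datatype 'a tree = Tip | Node('a tree, 'a, 'a tree)
Tip = ("Tip",)
def Node(l, x, r):
    return ("Node", l, x, r)

def tree_case(e, tip_val, node_fun):
    # Every constructor is handled, in a fixed order, and patterns are
    # not nested -- as with the generated case construct.
    if e[0] == "Tip":
        return tip_val
    _, l, x, r = e
    return node_fun(l, x, r)

t = Node(Tip, 1, Node(Tip, 2, Tip))
assert tree_case(t, 0, lambda l, x, r: x) == 1
assert Node(Tip, 1, Tip) != Tip                    # distinct
assert Node(Tip, 1, Tip) == Node(Tip, 1, Tip)      # inject
```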
nipkow@464:
nipkow@464:
nipkow@464: \subsection{Defining datatypes}
nipkow@464:
nipkow@464: A datatype is defined in a theory definition file using the keyword {\tt
nipkow@464: datatype}. The definition following {\tt datatype} must conform to the
nipkow@464: syntax of {\em typedecl} specified in Fig.~\ref{datatype-grammar} and must
nipkow@464: obey the rules in the previous section. As a result the theory is extended
nipkow@464: with the new type, the constructors, and the theorems listed in the previous
nipkow@464: section.
nipkow@464:
nipkow@464: \begin{figure}
nipkow@464: \begin{rail}
nipkow@464: typedecl : typevarlist id '=' (cons + '|')
nipkow@464: ;
nipkow@464: cons : (id | string) ( () | '(' (typ + ',') ')' ) ( () | mixfix )
nipkow@464: ;
nipkow@464: typ : typevarlist id
nipkow@464: | tid
lcp@594: ;
nipkow@464: typevarlist : () | tid | '(' (tid + ',') ')'
nipkow@464: ;
nipkow@464: \end{rail}
nipkow@464: \caption{Syntax of datatype declarations}
nipkow@464: \label{datatype-grammar}
nipkow@464: \end{figure}
nipkow@464:
nipkow@465: Reading the theory file produces a structure which, in addition to the usual
nipkow@464: components, contains a structure named $t$ for each datatype $t$ defined in
nipkow@465: the file.\footnote{Otherwise multiple datatypes in the same theory file would
nipkow@465: lead to name clashes.} Each structure $t$ contains the following elements:
nipkow@464: \begin{ttbox}
nipkow@465: val distinct : thm list
nipkow@465: val inject : thm list
nipkow@464: val induct : thm
nipkow@464: val cases : thm list
nipkow@464: val simps : thm list
nipkow@464: val induct_tac : string -> int -> tactic
nipkow@464: \end{ttbox}
nipkow@465: {\tt distinct}, {\tt inject} and {\tt induct} contain the theorems described
nipkow@465: above. For convenience {\tt distinct} contains inequalities in both
nipkow@465: directions.
nipkow@464: \begin{warn}
nipkow@464: If there are five or more constructors, the {\em t\_ord} scheme is used for
nipkow@465: {\tt distinct}. In this case the theory {\tt Arith} must be contained
nipkow@465: in the current theory, if necessary by including it explicitly.
nipkow@464: \end{warn}
nipkow@465: The reduction rules of the {\tt case}-construct are in {\tt cases}. All
nipkow@465: theorems from {\tt distinct}, {\tt inject} and {\tt cases} are combined in
lcp@1086: {\tt simps} for use with the simplifier. The tactic {\verb$induct_tac$~{\em
lcp@1086: var i}\/} applies structural induction over variable {\em var} to
nipkow@464: subgoal {\em i}.
nipkow@464:
nipkow@464:
nipkow@464: \subsection{Examples}
nipkow@464:
nipkow@464: \subsubsection{The datatype $\alpha~list$}
nipkow@464:
nipkow@465: We want to define the type $\alpha~list$.\footnote{Of course there is a list
nipkow@465: type in HOL already. This is only an example.} To do this we have to build
nipkow@465: a new theory that contains the type definition. We start from {\tt HOL}.
nipkow@464: \begin{ttbox}
nipkow@464: MyList = HOL +
nipkow@464: datatype 'a list = Nil | Cons ('a, 'a list)
nipkow@464: end
nipkow@464: \end{ttbox}
nipkow@465: After loading the theory (\verb$use_thy "MyList"$), we can prove
nipkow@465: $Cons(x,xs)\neq xs$. First we build a suitable simpset for the simplifier:
nipkow@464: \begin{ttbox}
nipkow@464: val mylist_ss = HOL_ss addsimps MyList.list.simps;
nipkow@464: goal MyList.thy "!x. Cons(x,xs) ~= xs";
nipkow@464: {\out Level 0}
nipkow@464: {\out ! x. Cons(x, xs) ~= xs}
nipkow@464: {\out 1. ! x. Cons(x, xs) ~= xs}
nipkow@464: \end{ttbox}
nipkow@464: This can be proved by the structural induction tactic:
nipkow@464: \begin{ttbox}
nipkow@464: by (MyList.list.induct_tac "xs" 1);
nipkow@464: {\out Level 1}
nipkow@464: {\out ! x. Cons(x, xs) ~= xs}
nipkow@464: {\out 1. ! x. Cons(x, Nil) ~= Nil}
nipkow@464: {\out 2. !!a list.}
nipkow@464: {\out ! x. Cons(x, list) ~= list ==>}
nipkow@464: {\out ! x. Cons(x, Cons(a, list)) ~= Cons(a, list)}
nipkow@464: \end{ttbox}
nipkow@465: The first subgoal can be proved with the simplifier and the distinctness
nipkow@465: axioms which are part of \verb$mylist_ss$.
nipkow@464: \begin{ttbox}
nipkow@464: by (simp_tac mylist_ss 1);
nipkow@464: {\out Level 2}
nipkow@464: {\out ! x. Cons(x, xs) ~= xs}
nipkow@464: {\out 1. !!a list.}
nipkow@464: {\out ! x. Cons(x, list) ~= list ==>}
nipkow@464: {\out ! x. Cons(x, Cons(a, list)) ~= Cons(a, list)}
nipkow@464: \end{ttbox}
nipkow@465: Using the freeness axioms we can quickly prove the remaining goal.
nipkow@464: \begin{ttbox}
nipkow@464: by (asm_simp_tac mylist_ss 1);
nipkow@464: {\out Level 3}
nipkow@464: {\out ! x. Cons(x, xs) ~= xs}
nipkow@464: {\out No subgoals!}
nipkow@464: \end{ttbox}
nipkow@464: Because both subgoals were proved by almost the same tactic, we could have
nipkow@464: handled them in one step using
nipkow@464: \begin{ttbox}
nipkow@464: by (ALLGOALS (asm_simp_tac mylist_ss));
nipkow@464: \end{ttbox}
nipkow@464:
nipkow@464:
nipkow@464: \subsubsection{The datatype $\alpha~list$ with mixfix syntax}
nipkow@464:
nipkow@464: In this example we define the type $\alpha~list$ again but this time we want
nipkow@464: to write {\tt []} instead of {\tt Nil} and we want to use the infix operator
nipkow@464: \verb|#| instead of {\tt Cons}. To do this we simply add mixfix annotations
nipkow@464: after the constructor declarations as follows:
nipkow@464: \begin{ttbox}
nipkow@464: MyList = HOL +
nipkow@464: datatype 'a list = "[]" ("[]")
nipkow@464: | "#" ('a, 'a list) (infixr 70)
nipkow@464: end
nipkow@464: \end{ttbox}
nipkow@464: Now the theorem in the previous example can be written \verb|x#xs ~= xs|. The
nipkow@464: proof is the same.
nipkow@464:
nipkow@464:
nipkow@464: \subsubsection{A datatype for weekdays}
nipkow@464:
nipkow@464: This example shows a datatype that consists of more than four constructors:
nipkow@464: \begin{ttbox}
nipkow@464: Days = Arith +
nipkow@464: datatype days = Mo | Tu | We | Th | Fr | Sa | So
nipkow@464: end
nipkow@464: \end{ttbox}
nipkow@464: Because there are more than four constructors, the theory must be based on
nipkow@464: {\tt Arith}. Inequality is defined via a function \verb|days_ord|. Although
nipkow@465: the expression \verb|Mo ~= Tu| is not directly contained in {\tt distinct},
nipkow@465: it can be proved by the simplifier if \verb$arith_ss$ is used:
nipkow@464: \begin{ttbox}
nipkow@464: val days_ss = arith_ss addsimps Days.days.simps;
nipkow@464:
nipkow@464: goal Days.thy "Mo ~= Tu";
nipkow@464: by (simp_tac days_ss 1);
nipkow@464: \end{ttbox}
nipkow@464: Note that usually it is not necessary to derive these inequalities explicitly
nipkow@464: because the simplifier will dispose of them automatically.
nipkow@464:
nipkow@600: \subsection{Primitive recursive functions}
nipkow@600: \index{primitive recursion|(}
nipkow@600: \index{*primrec|(}
nipkow@600:
nipkow@600: Datatypes come with a uniform way of defining functions, {\bf primitive
nipkow@600: recursion}. Although it is possible to define primitive recursive functions
nipkow@600: by asserting their reduction rules as new axioms, e.g.\
nipkow@600: \begin{ttbox}
nipkow@600: Append = MyList +
nipkow@600: consts app :: "['a list,'a list] => 'a list"
nipkow@600: rules
nipkow@600: app_Nil "app([],ys) = ys"
nipkow@600: app_Cons "app(x#xs, ys) = x#app(xs,ys)"
nipkow@600: end
nipkow@600: \end{ttbox}
nipkow@600: this carries with it the danger of accidentally asserting an inconsistency,
nipkow@600: as in \verb$app([],ys) = us$. Therefore primitive recursive functions on
nipkow@600: datatypes can be defined with a special syntax:
nipkow@600: \begin{ttbox}
nipkow@600: Append = MyList +
nipkow@600: consts app :: "['a list,'a list] => 'a list"
nipkow@600: primrec app MyList.list
nipkow@600: app_Nil "app([],ys) = ys"
nipkow@600: app_Cons "app(x#xs, ys) = x#app(xs,ys)"
nipkow@600: end
nipkow@600: \end{ttbox}
nipkow@600: The system will now check that the two rules \verb$app_Nil$ and
nipkow@600: \verb$app_Cons$ do indeed form a primitive recursive definition, thus
nipkow@600: ensuring that consistency is maintained. For example
nipkow@600: \begin{ttbox}
nipkow@600: primrec app MyList.list
nipkow@600: app_Nil "app([],ys) = us"
nipkow@600: \end{ttbox}
nipkow@600: is rejected:
nipkow@600: \begin{ttbox}
nipkow@600: Extra variables on rhs
nipkow@600: \end{ttbox}
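The check that rejects such rules amounts to a simple variable-occurrence test. A minimal Python sketch (our own formulation, not Isabelle's implementation; the real system extracts the variable sets from the parsed terms):

```python
# Sketch of the primrec well-formedness test: a reduction rule is
# accepted only if its right-hand side introduces no variables beyond
# those already bound on the left-hand side.
def extra_rhs_vars(lhs_vars, rhs_vars):
    """Variables on the rhs but not on the lhs; a non-empty result
    corresponds to the error message `Extra variables on rhs'."""
    return sorted(set(rhs_vars) - set(lhs_vars))
```

For the faulty rule above, the right-hand side variable {\tt us} does not occur on the left, so the rule is rejected.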
nipkow@600: \bigskip
nipkow@600:
nipkow@600: The general form of a primitive recursive definition is
nipkow@600: \begin{ttbox}
nipkow@600: primrec {\it function} {\it type}
nipkow@600: {\it reduction rules}
nipkow@600: \end{ttbox}
nipkow@600: where
nipkow@600: \begin{itemize}
nipkow@600: \item {\it function} is the name of the function, either as an {\it id} or a
nipkow@600: {\it string}. The function must already have been declared.
nipkow@600: \item {\it type} is the name of the datatype, either as an {\it id} or in the
nipkow@600: long form {\it Thy.t}, where {\it Thy} is the name of the parent theory the
nipkow@600:   datatype was declared in, and {\it t} the name of the datatype. The long form
nipkow@600: is required if the {\tt datatype} and the {\tt primrec} sections are in
nipkow@600: different theories.
nipkow@600: \item {\it reduction rules} specify one or more named equations of the form
nipkow@600: {\it id\/}~{\it string}, where the identifier gives the name of the rule in
nipkow@600: the result structure, and {\it string} is a reduction rule of the form \[
nipkow@600: f(x_1,\dots,x_m,C(y_1,\dots,y_k),z_1,\dots,z_n) = r \] such that $C$ is a
nipkow@600: constructor of the datatype, $r$ contains only the free variables on the
nipkow@600: left-hand side, and all recursive calls in $r$ are of the form
nipkow@600: $f(\dots,y_i,\dots)$ for some $i$. There must be exactly one reduction
nipkow@600: rule for each constructor.
nipkow@600: \end{itemize}
nipkow@600: A theory file may contain any number of {\tt primrec} sections, which may be
nipkow@600: intermixed with other declarations.
nipkow@600:
nipkow@600: For the consistency-sensitive user it may be reassuring to know that {\tt
nipkow@600: primrec} does not assert the reduction rules as new axioms but derives them
nipkow@600: as theorems from an explicit definition of the recursive function in terms of
nipkow@600: a recursion operator on the datatype.
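The idea can be illustrated in Python (a sketch under our own names, not Isabelle's actual construction): given a recursion operator for lists, {\tt app} becomes a single non-recursive definition, and the reduction rules follow as consequences.

```python
# Illustrative sketch: a recursion operator for lists, from which app is
# defined without explicit recursion.
def list_rec(nil_case, cons_case, xs):
    """Structural recursion: list_rec(n, c, []) = n and
    list_rec(n, c, [x] + xs) = c(x, xs, list_rec(n, c, xs))."""
    if not xs:
        return nil_case
    x, rest = xs[0], xs[1:]
    return cons_case(x, rest, list_rec(nil_case, cons_case, rest))

def app(xs, ys):
    # The reduction rules app_Nil and app_Cons hold by the equations
    # of list_rec; app itself is not recursive.
    return list_rec(ys, lambda x, _, r: [x] + r, xs)
```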
nipkow@600:
nipkow@600: The primitive recursive function can also use infix or mixfix syntax:
nipkow@600: \begin{ttbox}
nipkow@600: Append = MyList +
nipkow@600: consts "@" :: "['a list,'a list] => 'a list" (infixr 60)
nipkow@600: primrec "op @" MyList.list
nipkow@600: app_Nil "[] @ ys = ys"
nipkow@600: app_Cons "(x#xs) @ ys = x#(xs @ ys)"
nipkow@600: end
nipkow@600: \end{ttbox}
nipkow@600:
nipkow@600: The reduction rules become part of the ML structure \verb$Append$ and can
nipkow@600: be used to prove theorems about the function:
nipkow@600: \begin{ttbox}
nipkow@600: val append_ss = HOL_ss addsimps [Append.app_Nil,Append.app_Cons];
nipkow@600:
nipkow@600: goal Append.thy "(xs @ ys) @ zs = xs @ (ys @ zs)";
nipkow@600: by (MyList.list.induct_tac "xs" 1);
nipkow@600: by (ALLGOALS(asm_simp_tac append_ss));
nipkow@600: \end{ttbox}
nipkow@600:
nipkow@600: %Note that underdefined primitive recursive functions are allowed:
nipkow@600: %\begin{ttbox}
nipkow@600: %Tl = MyList +
nipkow@600: %consts tl :: "'a list => 'a list"
nipkow@600: %primrec tl MyList.list
nipkow@600: % tl_Cons "tl(x#xs) = xs"
nipkow@600: %end
nipkow@600: %\end{ttbox}
nipkow@600: %Nevertheless {\tt tl} is total, although we do not know what the result of
nipkow@600: %\verb$tl([])$ is.
nipkow@600:
nipkow@600: \index{primitive recursion|)}
nipkow@600: \index{*primrec|)}
lcp@861: \index{*datatype|)}
nipkow@464:
nipkow@464:
lcp@594: \section{Inductive and coinductive definitions}
lcp@594: \index{*inductive|(}
lcp@594: \index{*coinductive|(}
lcp@594:
lcp@594: An {\bf inductive definition} specifies the least set closed under given
lcp@594: rules. For example, a structural operational semantics is an inductive
lcp@594: definition of an evaluation relation. Dually, a {\bf coinductive
paulson@2975: definition} specifies the greatest set consistent with given rules. An
lcp@594: important example is using bisimulation relations to formalize equivalence
lcp@594: of processes and infinite data structures.
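For finitary rules over a finite universe, the least set closed under the rules can be computed by iterating from the empty set until nothing new is added, which may help fix intuitions. A Python sketch (illustrative only; the rules and bound are our own example):

```python
# Least fixedpoint by iteration. Example rules, restricted to numbers
# below 20:   0 : E   and   n : E ==> n+2 : E.
def lfp(step):
    s = frozenset()
    while True:
        t = step(s)
        if t == s:          # nothing new: least fixedpoint reached
            return s
        s = t

evens = lfp(lambda s: frozenset({0}) | {n + 2 for n in s if n + 2 < 20})
```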
lcp@594:
lcp@594: A theory file may contain any number of inductive and coinductive
lcp@594: definitions, which may be intermixed with other declarations. Note,
lcp@594: however, that the (co)inductive sets {\bf must} be declared separately as
lcp@594: constants beforehand; they may have mixfix syntax or be subject to syntax
lcp@594: translations.
lcp@594:
lcp@594: Each (co)inductive definition adds definitions to the theory and also
lcp@594: proves some theorems. Each definition creates an ML structure, which is a
lcp@594: substructure of the main theory structure.
lcp@594:
lcp@594: This package is derived from the ZF one, described in a
lcp@594: separate paper,\footnote{It appeared in CADE~\cite{paulson-CADE} and a
lcp@594: longer version is distributed with Isabelle.} which you should refer to
lcp@594: in case of difficulties. The package is simpler than ZF's, thanks to HOL's
lcp@594: automatic type-checking. The type of the (co)inductive set determines the
lcp@594: domain of the fixedpoint definition, and the package does not use inference
lcp@594: rules for type-checking.
lcp@594:
lcp@594:
lcp@594: \subsection{The result structure}
lcp@594: Many of the result structure's components have been discussed in the paper;
lcp@594: others are self-explanatory.
lcp@594: \begin{description}
lcp@594: \item[\tt thy] is the new theory containing the recursive sets.
lcp@594:
lcp@594: \item[\tt defs] is the list of definitions of the recursive sets.
lcp@594:
lcp@594: \item[\tt mono] is a monotonicity theorem for the fixedpoint operator.
lcp@594:
lcp@594: \item[\tt unfold] is a fixedpoint equation for the recursive set (the union of
lcp@594: the recursive sets, in the case of mutual recursion).
lcp@594:
lcp@594: \item[\tt intrs] is the list of introduction rules, now proved as theorems, for
lcp@594: the recursive sets. The rules are also available individually, using the
lcp@594: names given them in the theory file.
lcp@594:
lcp@594: \item[\tt elim] is the elimination rule.
lcp@594:
lcp@594: \item[\tt mk\_cases] is a function to create simplified instances of {\tt
lcp@594: elim}, using freeness reasoning on some underlying datatype.
lcp@594: \end{description}
lcp@594:
lcp@594: For an inductive definition, the result structure contains two induction rules,
lcp@594: {\tt induct} and \verb|mutual_induct|. For a coinductive definition, it
lcp@594: contains the rule \verb|coinduct|.
lcp@594:
lcp@594: Figure~\ref{def-result-fig} summarizes the two result signatures,
lcp@594: specifying the types of all these components.
lcp@594:
lcp@594: \begin{figure}
lcp@594: \begin{ttbox}
lcp@594: sig
lcp@594: val thy : theory
lcp@594: val defs : thm list
lcp@594: val mono : thm
lcp@594: val unfold : thm
lcp@594: val intrs : thm list
lcp@594: val elim : thm
lcp@594: val mk_cases : thm list -> string -> thm
lcp@594: {\it(Inductive definitions only)}
lcp@594: val induct : thm
lcp@594: val mutual_induct: thm
lcp@594: {\it(Coinductive definitions only)}
lcp@594: val coinduct : thm
lcp@594: end
lcp@594: \end{ttbox}
lcp@594: \hrule
lcp@594: \caption{The result of a (co)inductive definition} \label{def-result-fig}
lcp@594: \end{figure}
lcp@594:
lcp@594: \subsection{The syntax of a (co)inductive definition}
lcp@594: An inductive definition has the form
lcp@594: \begin{ttbox}
lcp@594: inductive {\it inductive sets}
lcp@594: intrs {\it introduction rules}
lcp@594: monos {\it monotonicity theorems}
lcp@594: con_defs {\it constructor definitions}
lcp@594: \end{ttbox}
lcp@594: A coinductive definition is identical, except that it starts with the keyword
lcp@594: {\tt coinductive}.
lcp@594:
lcp@594: The {\tt monos} and {\tt con\_defs} sections are optional. If present,
lcp@594: each is specified as a string, which must be a valid ML expression of type
lcp@594: {\tt thm list}. It is simply inserted into the {\tt .thy.ML} file; if it
lcp@594: is ill-formed, it will trigger ML error messages. You can then inspect the
lcp@594: file in your directory.
lcp@594:
lcp@594: \begin{itemize}
lcp@594: \item The {\it inductive sets} are specified by one or more strings.
lcp@594:
lcp@594: \item The {\it introduction rules} specify one or more introduction rules in
lcp@594: the form {\it ident\/}~{\it string}, where the identifier gives the name of
lcp@594: the rule in the result structure.
lcp@594:
lcp@594: \item The {\it monotonicity theorems} are required for each operator
lcp@594: applied to a recursive set in the introduction rules. There {\bf must}
lcp@594: be a theorem of the form $A\subseteq B\Imp M(A)\subseteq M(B)$, for each
lcp@594: premise $t\in M(R_i)$ in an introduction rule!
lcp@594:
lcp@594: \item The {\it constructor definitions} contain definitions of constants
lcp@594:   appearing in the introduction rules. In most cases this section can be omitted.
lcp@594: \end{itemize}
lcp@594:
lcp@594: The package has a few notable restrictions:
lcp@594: \begin{itemize}
lcp@594: \item The theory must separately declare the recursive sets as
lcp@594: constants.
lcp@594:
lcp@594: \item The names of the recursive sets must be identifiers, not infix
lcp@594: operators.
lcp@594:
lcp@594: \item Side-conditions must not be conjunctions. However, an introduction rule
lcp@594: may contain any number of side-conditions.
lcp@594:
lcp@594: \item Side-conditions of the form $x=t$, where the variable~$x$ does not
lcp@594: occur in~$t$, will be substituted through the rule \verb|mutual_induct|.
lcp@594: \end{itemize}
lcp@594:
lcp@594:
lcp@594: \subsection{Example of an inductive definition}
lcp@594: Two declarations, included in a theory file, define the finite powerset
lcp@594: operator. First we declare the constant~{\tt Fin}. Then we declare it
lcp@594: inductively, with two introduction rules:
lcp@594: \begin{ttbox}
lcp@594: consts Fin :: "'a set => 'a set set"
lcp@594: inductive "Fin(A)"
lcp@594: intrs
lcp@594: emptyI "{} : Fin(A)"
lcp@594: insertI "[| a: A; b: Fin(A) |] ==> insert(a,b) : Fin(A)"
lcp@594: \end{ttbox}
lcp@594: The resulting theory structure contains a substructure, called~{\tt Fin}.
lcp@594: It contains the {\tt Fin}$(A)$ introduction rules as the list {\tt Fin.intrs},
lcp@594: and also individually as {\tt Fin.emptyI} and {\tt Fin.insertI}. The induction
lcp@594: rule is {\tt Fin.induct}.
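The two rules determine {\tt Fin}$(A)$ as a least fixedpoint; for a finite~$A$ it can be computed by iteration, as the following illustrative Python sketch (our own formulation) shows.

```python
# Least set closed under emptyI ({} : Fin(A)) and insertI
# (a:A, b:Fin(A) ==> insert(a,b) : Fin(A)), computed by iteration.
def fin(A):
    sets = {frozenset()}                          # emptyI
    while True:
        new = {b | {a} for a in A for b in sets}  # insertI
        if new <= sets:                           # nothing new: fixedpoint
            return sets
        sets |= new
```

For a three-element set the result is all $2^3 = 8$ subsets, which for a finite~$A$ coincide with the finite subsets.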
lcp@594:
lcp@594: For another example, here is a theory file defining the accessible part of a
lcp@594: relation. The main thing to note is the use of~{\tt Pow} in the sole
lcp@594: introduction rule, and the corresponding mention of the rule
lcp@594: \verb|Pow_mono| in the {\tt monos} list. The paper discusses a ZF version
lcp@594: of this example in more detail.
lcp@594: \begin{ttbox}
lcp@594: Acc = WF +
lcp@594: consts pred :: "['b, ('a * 'b)set] => 'a set" (*Set of predecessors*)
lcp@594: acc :: "('a * 'a)set => 'a set" (*Accessible part*)
lcp@594: defs   pred_def  "pred(x,r) == {y. <y,x>:r}"
lcp@594: inductive "acc(r)"
lcp@594: intrs
lcp@594: pred "pred(a,r): Pow(acc(r)) ==> a: acc(r)"
lcp@594: monos "[Pow_mono]"
lcp@594: end
lcp@594: \end{ttbox}
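The accessible part is again a least fixedpoint: an element enters {\tt acc}$(r)$ once all of its predecessors have. For a finite relation this can be computed directly; the following Python sketch (our own formulation, not part of the theory) also shows why the elements of a cycle are never accessible.

```python
# pred(x,r) = set of predecessors of x; an element is added to acc(r)
# once all of its predecessors are already in it, mirroring the premise
# pred(a,r) : Pow(acc(r)).
def pred(x, r):
    return {y for (y, z) in r if z == x}

def acc(r, universe):
    a = set()
    while True:
        new = {x for x in universe if pred(x, r) <= a}
        if new <= a:        # fixedpoint reached
            return a
        a |= new
```

Over a well-founded relation, such as {\tt <} on a finite segment of the naturals, every element is accessible; over a cyclic relation no element of the cycle ever enters the set.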
lcp@594: The HOL distribution contains many other inductive definitions, such as the
lcp@594: theory {\tt HOL/ex/PropLog.thy} and the directory {\tt HOL/IMP}. The
lcp@629: theory {\tt HOL/ex/LList.thy} contains coinductive definitions.
lcp@594:
lcp@594: \index{*coinductive|)} \index{*inductive|)} \underscoreoff
lcp@594:
nipkow@464:
lcp@111: \section{The examples directories}
lcp@344: Directory {\tt HOL/Subst} contains Martin Coen's mechanisation of a theory of
lcp@111: substitutions and unifiers. It is based on Paulson's previous
lcp@344: mechanisation in {\LCF}~\cite{paulson85} of Manna and Waldinger's
lcp@111: theory~\cite{mw81}.
lcp@111:
lcp@594: Directory {\tt HOL/IMP} contains a mechanised version of a semantic
lcp@594: equivalence proof taken from Winskel~\cite{winskel93}. It formalises the
lcp@594: denotational and operational semantics of a simple while-language, then
lcp@594: proves the two equivalent. It contains several datatype and inductive
lcp@594: definitions, and demonstrates their use.
lcp@594:
wenzelm@9695: Directory {\tt HOL/ex} contains other examples and experimental proofs in HOL.
wenzelm@9695: Here is an overview of the more interesting files.
lcp@594: \begin{itemize}
lcp@594: \item File {\tt cla.ML} demonstrates the classical reasoner on over sixty
lcp@344: predicate calculus theorems, ranging from simple tautologies to
lcp@344: moderately difficult problems involving equality and quantifiers.
lcp@344:
lcp@594: \item File {\tt meson.ML} contains an experimental implementation of the {\sc
lcp@315: meson} proof procedure, inspired by Plaisted~\cite{plaisted90}. It is
lcp@315: much more powerful than Isabelle's classical reasoner. But it is less
lcp@315: useful in practice because it works only for pure logic; it does not
lcp@315: accept derived rules for the set theory primitives, for example.
lcp@104:
lcp@594: \item File {\tt mesontest.ML} contains test data for the {\sc meson} proof
lcp@315: procedure. These are mostly taken from Pelletier \cite{pelletier86}.
lcp@104:
lcp@594: \item File {\tt set.ML} proves Cantor's Theorem, which is presented in
lcp@315: \S\ref{sec:hol-cantor} below, and the Schr\"oder-Bernstein Theorem.
lcp@104:
lcp@594: \item Theories {\tt InSort} and {\tt Qsort} prove correctness properties of
lcp@594: insertion sort and quick sort.
lcp@104:
lcp@629: \item The definition of lazy lists demonstrates methods for handling
lcp@629: infinite data structures and coinduction in higher-order
lcp@629: logic~\cite{paulson-coind}. Theory \thydx{LList} defines an operator for
lcp@629: corecursion on lazy lists, which is used to define a few simple functions
lcp@629: such as map and append. Corecursion cannot easily define operations such
lcp@629: as filter, which can compute indefinitely before yielding the next
lcp@629: element (if any!) of the lazy list. A coinduction principle is defined
lcp@629: for proving equations on lazy lists.
wenzelm@9695:
wenzelm@9695: \item Theory {\tt PropLog} proves the soundness and completeness of classical
wenzelm@9695: propositional logic, given a truth table semantics. The only connective is
wenzelm@9695: $\imp$. A Hilbert-style axiom system is specified, and its set of theorems
wenzelm@9695: defined inductively. A similar proof in ZF is described
wenzelm@9695: elsewhere~\cite{paulson-set-II}.
lcp@315:
lcp@594: \item Theory {\tt Term} develops an experimental recursive type definition;
lcp@315: the recursion goes through the type constructor~\tydx{list}.
lcp@104:
lcp@594: \item Theory {\tt Simult} constructs mutually recursive sets of trees and
lcp@594: forests, including induction and recursion rules.
lcp@111:
lcp@594: \item Theory {\tt MT} contains Jacob Frost's formalization~\cite{frost93} of
lcp@315: Milner and Tofte's coinduction example~\cite{milner-coind}. This
lcp@315: substantial proof concerns the soundness of a type system for a simple
lcp@315: functional language. The semantics of recursion is given by a cyclic
lcp@315: environment, which makes a coinductive argument appropriate.
lcp@594: \end{itemize}
lcp@104:
lcp@104:
lcp@344: \goodbreak
lcp@315: \section{Example: Cantor's Theorem}\label{sec:hol-cantor}
lcp@104: Cantor's Theorem states that every set has more subsets than it has
lcp@104: elements. It has become a favourite example in higher-order logic since
lcp@104: it is so easily expressed:
lcp@104: \[ \forall f::[\alpha,\alpha]\To bool. \exists S::\alpha\To bool.
lcp@104: \forall x::\alpha. f(x) \not= S
lcp@104: \]
lcp@315: %
lcp@104: Viewing types as sets, $\alpha\To bool$ represents the powerset
lcp@104: of~$\alpha$. This version states that for every function from $\alpha$ to
lcp@344: its powerset, some subset is outside its range.
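The diagonal construction behind the proof is easy to check concretely. The following Python sketch builds $S = \{x.\; x\notin f(x)\}$ for one arbitrary example function (our own choice, not part of the proof) and confirms that $S$ differs from every $f(x)$:

```python
# Diagonal set for one concrete f from a four-element type to its
# powerset: S differs from f(x) at the element x itself, so S is
# outside the range of f.
universe = {0, 1, 2, 3}
f = {0: {0, 1}, 1: {2}, 2: {0, 2, 3}, 3: set()}

S = {x for x in universe if x not in f[x]}   # here S = {1, 3}
```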
lcp@344:
wenzelm@9695: The Isabelle proof uses HOL's set theory, with the type $\alpha\,set$ and the
wenzelm@9695: operator \cdx{range}. The set~$S$ is given as an unknown instead of a
lcp@315: quantified variable so that we may inspect the subset found by the proof.
lcp@104: \begin{ttbox}
lcp@104: goal Set.thy "~ ?S : range(f :: 'a=>'a set)";
lcp@104: {\out Level 0}
lcp@104: {\out ~ ?S : range(f)}
lcp@104: {\out 1. ~ ?S : range(f)}
lcp@104: \end{ttbox}
lcp@315: The first two steps are routine. The rule \tdx{rangeE} replaces
lcp@315: $\Var{S}\in {\tt range}(f)$ by $\Var{S}=f(x)$ for some~$x$.
lcp@104: \begin{ttbox}
lcp@104: by (resolve_tac [notI] 1);
lcp@104: {\out Level 1}
lcp@104: {\out ~ ?S : range(f)}
lcp@104: {\out 1. ?S : range(f) ==> False}
lcp@287: \ttbreak
lcp@104: by (eresolve_tac [rangeE] 1);
lcp@104: {\out Level 2}
lcp@104: {\out ~ ?S : range(f)}
lcp@104: {\out 1. !!x. ?S = f(x) ==> False}
lcp@104: \end{ttbox}
lcp@315: Next, we apply \tdx{equalityCE}, reasoning that since $\Var{S}=f(x)$,
lcp@104: we have $\Var{c}\in \Var{S}$ if and only if $\Var{c}\in f(x)$ for
lcp@104: any~$\Var{c}$.
lcp@104: \begin{ttbox}
lcp@104: by (eresolve_tac [equalityCE] 1);
lcp@104: {\out Level 3}
lcp@104: {\out ~ ?S : range(f)}
lcp@104: {\out 1. !!x. [| ?c3(x) : ?S; ?c3(x) : f(x) |] ==> False}
lcp@104: {\out 2. !!x. [| ~ ?c3(x) : ?S; ~ ?c3(x) : f(x) |] ==> False}
lcp@104: \end{ttbox}
lcp@315: Now we use a bit of creativity. Suppose that~$\Var{S}$ has the form of a
lcp@104: comprehension. Then $\Var{c}\in\{x.\Var{P}(x)\}$ implies
lcp@315: $\Var{P}(\Var{c})$. Destruct-resolution using \tdx{CollectD}
lcp@315: instantiates~$\Var{S}$ and creates the new assumption.
lcp@104: \begin{ttbox}
lcp@104: by (dresolve_tac [CollectD] 1);
lcp@104: {\out Level 4}
lcp@104: {\out ~ \{x. ?P7(x)\} : range(f)}
lcp@104: {\out 1. !!x. [| ?c3(x) : f(x); ?P7(?c3(x)) |] ==> False}
lcp@104: {\out 2. !!x. [| ~ ?c3(x) : \{x. ?P7(x)\}; ~ ?c3(x) : f(x) |] ==> False}
lcp@104: \end{ttbox}
lcp@104: Forcing a contradiction between the two assumptions of subgoal~1 completes
lcp@344: the instantiation of~$S$. It is now the set $\{x. x\not\in f(x)\}$, which
lcp@344: is the standard diagonal construction.
lcp@104: \begin{ttbox}
lcp@104: by (contr_tac 1);
lcp@104: {\out Level 5}
lcp@104: {\out ~ \{x. ~ x : f(x)\} : range(f)}
lcp@104: {\out 1. !!x. [| ~ x : \{x. ~ x : f(x)\}; ~ x : f(x) |] ==> False}
lcp@104: \end{ttbox}
lcp@315: The rest should be easy. To apply \tdx{CollectI} to the negated
lcp@104: assumption, we employ \ttindex{swap_res_tac}:
lcp@104: \begin{ttbox}
lcp@104: by (swap_res_tac [CollectI] 1);
lcp@104: {\out Level 6}
lcp@104: {\out ~ \{x. ~ x : f(x)\} : range(f)}
lcp@104: {\out 1. !!x. [| ~ x : f(x); ~ False |] ==> ~ x : f(x)}
lcp@287: \ttbreak
lcp@104: by (assume_tac 1);
lcp@104: {\out Level 7}
lcp@104: {\out ~ \{x. ~ x : f(x)\} : range(f)}
lcp@104: {\out No subgoals!}
lcp@104: \end{ttbox}
lcp@104: How much creativity is required? As it happens, Isabelle can prove this
wenzelm@9695: theorem automatically. The classical set \ttindex{set_cs} contains rules for
wenzelm@9695: most of the constructs of HOL's set theory. We must augment it with
wenzelm@9695: \tdx{equalityCE} to break up set equalities, and then apply best-first search.
wenzelm@9695: Depth-first search would diverge, but best-first search successfully navigates
wenzelm@9695: through the large search space. \index{search!best-first}
lcp@104: \begin{ttbox}
lcp@104: choplev 0;
lcp@104: {\out Level 0}
lcp@104: {\out ~ ?S : range(f)}
lcp@104: {\out 1. ~ ?S : range(f)}
lcp@287: \ttbreak
lcp@104: by (best_tac (set_cs addSEs [equalityCE]) 1);
lcp@104: {\out Level 1}
lcp@104: {\out ~ \{x. ~ x : f(x)\} : range(f)}
lcp@104: {\out No subgoals!}
lcp@104: \end{ttbox}
lcp@315:
lcp@315: \index{higher-order logic|)}