wenzelm@26840: theory HOL_Specific wenzelm@43522: imports Base Main wenzelm@26840: begin wenzelm@26840: wenzelm@26852: chapter {* Isabelle/HOL \label{ch:hol} *} wenzelm@26849: wenzelm@44118: section {* Higher-Order Logic *} wenzelm@44118: wenzelm@44118: text {* Isabelle/HOL is based on Higher-Order Logic, a polymorphic wenzelm@44118: version of Church's Simple Theory of Types. HOL can be best wenzelm@44118: understood as a simply-typed version of classical set theory. The wenzelm@44118: logic was first implemented in Gordon's HOL system wenzelm@44118: \cite{mgordon-hol}. It extends Church's original logic wenzelm@44118: \cite{church40} by explicit type variables (naive polymorphism) and wenzelm@44118: a sound axiomatization scheme for new types based on subsets of wenzelm@44118: existing types. wenzelm@44118: wenzelm@44118: Andrews's book \cite{andrews86} is a full description of the wenzelm@44118: original Church-style higher-order logic, with proofs of correctness wenzelm@44118: and completeness wrt.\ certain set-theoretic interpretations. The wenzelm@44118: particular extensions of Gordon-style HOL are explained semantically wenzelm@44118: in two chapters of the 1993 HOL book \cite{pitts93}. wenzelm@44118: wenzelm@44118: Experience with HOL over decades has demonstrated that higher-order wenzelm@44118: logic is widely applicable in many areas of mathematics and computer wenzelm@44118: science. In a sense, Higher-Order Logic is simpler than First-Order wenzelm@44118: Logic, because there are fewer restrictions and special cases. Note wenzelm@44118: that HOL is \emph{weaker} than FOL with axioms for ZF set theory, wenzelm@44118: which is traditionally considered the standard foundation of regular wenzelm@44118: mathematics, but for most applications this does not matter. If you wenzelm@44118: prefer ML to Lisp, you will probably prefer HOL to ZF. wenzelm@44118: wenzelm@44118: \medskip The syntax of HOL follows @{text "\"}-calculus and wenzelm@44118: functional programming. Function application is curried. To apply wenzelm@44118: the function @{text f} of type @{text "\\<^sub>1 \ \\<^sub>2 \ \\<^sub>3"} to the wenzelm@44118: arguments @{text a} and @{text b} in HOL, you simply write @{text "f wenzelm@44118: a b"} (as in ML or Haskell). There is no ``apply'' operator; the wenzelm@44118: existing application of the Pure @{text "\"}-calculus is re-used. wenzelm@44118: Note that in HOL @{text "f (a, b)"} means ``@{text "f"} applied to wenzelm@44118: the pair @{text "(a, b)"} (which is notation for @{text "Pair a wenzelm@44118: b"}). The latter typically introduces extra formal efforts that can wenzelm@44118: be avoided by currying functions by default. Explicit tuples are as wenzelm@44118: infrequent in HOL formalizations as in good ML or Haskell programs. wenzelm@44118: wenzelm@44118: \medskip Isabelle/HOL has a distinct feel, compared to other wenzelm@44118: object-logics like Isabelle/ZF. It identifies object-level types wenzelm@44118: with meta-level types, taking advantage of the default wenzelm@44118: type-inference mechanism of Isabelle/Pure. HOL fully identifies wenzelm@44118: object-level functions with meta-level functions, with native wenzelm@44118: abstraction and application. wenzelm@44118: wenzelm@44118: These identifications allow Isabelle to support HOL particularly wenzelm@44118: nicely, but they also mean that HOL requires some sophistication wenzelm@44118: from the user. 
In particular, an understanding of Hindley-Milner wenzelm@44118: type-inference with type-classes, which are both used extensively in wenzelm@44118: the standard libraries and applications. Beginners can set wenzelm@44118: @{attribute show_types} or even @{attribute show_sorts} to get more wenzelm@44118: explicit information about the result of type-inference. *} wenzelm@44118: wenzelm@44118: wenzelm@44112: section {* Inductive and coinductive definitions \label{sec:hol-inductive} *} wenzelm@44112: wenzelm@44117: text {* An \emph{inductive definition} specifies the least predicate wenzelm@44117: or set @{text R} closed under given rules: applying a rule to wenzelm@44117: elements of @{text R} yields a result within @{text R}. For wenzelm@44117: example, a structural operational semantics is an inductive wenzelm@44117: definition of an evaluation relation. wenzelm@44112: wenzelm@44117: Dually, a \emph{coinductive definition} specifies the greatest wenzelm@44117: predicate or set @{text R} that is consistent with given rules: wenzelm@44117: every element of @{text R} can be seen as arising by applying a rule wenzelm@44117: to elements of @{text R}. An important example is using wenzelm@44117: bisimulation relations to formalise equivalence of processes and wenzelm@44117: infinite data structures. wenzelm@44117: wenzelm@44117: Both inductive and coinductive definitions are based on the wenzelm@44117: Knaster-Tarski fixed-point theorem for complete lattices. The wenzelm@44117: collection of introduction rules given by the user determines a wenzelm@44117: functor on subsets of set-theoretic relations. The required wenzelm@44117: monotonicity of the recursion scheme is proven as a prerequisite to wenzelm@44117: the fixed-point definition and the resulting consequences. This wenzelm@44117: works by pushing inclusion through logical connectives and any other wenzelm@44117: operator that might be wrapped around recursive occurrences of the wenzelm@44117: defined relation: there must be a monotonicity theorem of the form wenzelm@44117: @{text "A \ B \ \ A \ \ B"}, for each premise @{text "\ R t"} in an wenzelm@44117: introduction rule. The default rule declarations of Isabelle/HOL wenzelm@44117: already take care of most common situations. wenzelm@44112: wenzelm@44112: \begin{matharray}{rcl} wenzelm@44112: @{command_def (HOL) "inductive"} & : & @{text "local_theory \ local_theory"} \\ wenzelm@44112: @{command_def (HOL) "inductive_set"} & : & @{text "local_theory \ local_theory"} \\ wenzelm@44112: @{command_def (HOL) "coinductive"} & : & @{text "local_theory \ local_theory"} \\ wenzelm@44112: @{command_def (HOL) "coinductive_set"} & : & @{text "local_theory \ local_theory"} \\ wenzelm@44112: @{attribute_def (HOL) mono} & : & @{text attribute} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: @{rail " wenzelm@44112: (@@{command (HOL) inductive} | @@{command (HOL) inductive_set} | wenzelm@44112: @@{command (HOL) coinductive} | @@{command (HOL) coinductive_set}) wenzelm@44117: @{syntax target}? \\ wenzelm@44117: @{syntax \"fixes\"} (@'for' @{syntax \"fixes\"})? (@'where' clauses)? \\ wenzelm@44117: (@'monos' @{syntax thmrefs})? wenzelm@44112: ; wenzelm@44112: clauses: (@{syntax thmdecl}? 
@{syntax prop} + '|')
    ;
    @@{attribute (HOL) mono} (() | 'add' | 'del')
  "}

  \begin{description}

  \item @{command (HOL) "inductive"} and @{command (HOL)
  "coinductive"} define (co)inductive predicates from the given
  introduction rules.

  The propositions given as @{text "clauses"} in the @{keyword
  "where"} part are either rules of the usual @{text "\<And>/\<Longrightarrow>"} format
  (with arbitrary nesting), or equalities using @{text "\<equiv>"}.  The
  latter specifies extra-logical abbreviations in the sense of
  @{command_ref abbreviation}.  Introducing abstract syntax
  simultaneously with the actual introduction rules is occasionally
  useful for complex specifications.

  The optional @{keyword "for"} part contains a list of parameters of
  the (co)inductive predicates that remain fixed throughout the
  definition, in contrast to arguments of the relation that may vary
  in each occurrence within the given @{text "clauses"}.

  The optional @{keyword "monos"} declaration contains additional
  \emph{monotonicity theorems}, which are required for each operator
  applied to a recursive set in the introduction rules.

  \item @{command (HOL) "inductive_set"} and @{command (HOL)
  "coinductive_set"} are wrappers around the previous commands (which
  operate on native HOL predicates).  This allows (co)inductive sets
  to be defined, where multiple arguments are simulated via tuples.

  \item @{attribute (HOL) mono} declares monotonicity rules in the
  context.  These rules are involved in the automated monotonicity
  proof of the above inductive and coinductive definitions.

  \end{description}
*}


subsection {* Derived rules *}

text {* A (co)inductive definition of @{text R} provides the following
  main theorems:

  \begin{description}

  \item @{text R.intros} is the list of introduction rules as proven
  theorems, for the recursive predicates (or sets).  The rules are
  also available individually, using the names given to them in the
  theory file;

  \item @{text R.cases} is the case analysis (or elimination) rule;

  \item @{text R.induct} or @{text R.coinduct} is the (co)induction
  rule.

  \end{description}

  When several predicates @{text "R\<^sub>1, \<dots>, R\<^sub>n"} are
  defined simultaneously, the list of introduction rules is called
  @{text "R\<^sub>1_\<dots>_R\<^sub>n.intros"}, the case analysis rules are
  called @{text "R\<^sub>1.cases, \<dots>, R\<^sub>n.cases"}, and the list
  of mutual induction rules is called @{text
  "R\<^sub>1_\<dots>_R\<^sub>n.inducts"}.
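  \medskip For example, a hypothetical inductive predicate @{text ev}
  for even natural numbers, introduced by the two rules @{text "ev 0"}
  and @{text "ev n \<Longrightarrow> ev (Suc (Suc n))"}, provides the facts @{text
  "ev.intros"}, @{text "ev.cases"}, and @{text "ev.induct"}.  A typical
  rule-inversion argument is then sketched as follows:

  @{text [display] "lemma ev (Suc (Suc n)) \<Longrightarrow> ev n
    by (erule ev.cases) auto"}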
wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44112: subsection {* Monotonicity theorems *} wenzelm@44112: wenzelm@44117: text {* The context maintains a default set of theorems that are used wenzelm@44117: in monotonicity proofs. New rules can be declared via the wenzelm@44117: @{attribute (HOL) mono} attribute. See the main Isabelle/HOL wenzelm@44117: sources for some examples. The general format of such monotonicity wenzelm@44117: theorems is as follows: wenzelm@44112: wenzelm@44112: \begin{itemize} wenzelm@44112: wenzelm@44117: \item Theorems of the form @{text "A \ B \ \ A \ \ B"}, for proving wenzelm@44112: monotonicity of inductive definitions whose introduction rules have wenzelm@44117: premises involving terms such as @{text "\ R t"}. wenzelm@44112: wenzelm@44112: \item Monotonicity theorems for logical operators, which are of the wenzelm@44112: general form @{text "(\ \ \) \ \ (\ \ \) \ \ \ \"}. For example, in wenzelm@44112: the case of the operator @{text "\"}, the corresponding theorem is wenzelm@44112: \[ wenzelm@44112: \infer{@{text "P\<^sub>1 \ P\<^sub>2 \ Q\<^sub>1 \ Q\<^sub>2"}}{@{text "P\<^sub>1 \ Q\<^sub>1"} & @{text "P\<^sub>2 \ Q\<^sub>2"}} wenzelm@44112: \] wenzelm@44112: wenzelm@44112: \item De Morgan style equations for reasoning about the ``polarity'' wenzelm@44112: of expressions, e.g. wenzelm@44112: \[ wenzelm@44112: @{prop "\ \ P \ P"} \qquad\qquad wenzelm@44112: @{prop "\ (P \ Q) \ \ P \ \ Q"} wenzelm@44112: \] wenzelm@44112: wenzelm@44112: \item Equations for reducing complex operators to more primitive wenzelm@44112: ones whose monotonicity can easily be proved, e.g. wenzelm@44112: \[ wenzelm@44112: @{prop "(P \ Q) \ \ P \ Q"} \qquad\qquad wenzelm@44112: @{prop "Ball A P \ \x. x \ A \ P x"} wenzelm@44112: \] wenzelm@44112: wenzelm@44112: \end{itemize} wenzelm@44117: *} wenzelm@44112: wenzelm@44117: subsubsection {* Examples *} wenzelm@44117: wenzelm@44117: text {* The finite powerset operator can be defined inductively like this: *} wenzelm@44117: wenzelm@44117: inductive_set Fin :: "'a set \ 'a set set" for A :: "'a set" wenzelm@44117: where wenzelm@44117: empty: "{} \ Fin A" wenzelm@44117: | insert: "a \ A \ B \ Fin A \ insert a B \ Fin A" wenzelm@44117: wenzelm@44117: text {* The accessible part of a relation is defined as follows: *} wenzelm@44117: wenzelm@44117: inductive acc :: "('a \ 'a \ bool) \ 'a \ bool" wenzelm@44117: for r :: "'a \ 'a \ bool" (infix "\" 50) wenzelm@44117: where acc: "(\y. y \ x \ acc r y) \ acc r x" wenzelm@44117: wenzelm@44117: text {* Common logical connectives can be easily characterized as wenzelm@44117: non-recursive inductive definitions with parameters, but without wenzelm@44117: arguments. *} wenzelm@44117: wenzelm@44117: inductive AND for A B :: bool wenzelm@44117: where "A \ B \ AND A B" wenzelm@44117: wenzelm@44117: inductive OR for A B :: bool wenzelm@44117: where "A \ OR A B" wenzelm@44117: | "B \ OR A B" wenzelm@44117: wenzelm@44117: inductive EXISTS for B :: "'a \ bool" wenzelm@44117: where "B a \ EXISTS B" wenzelm@44117: wenzelm@44117: text {* Here the @{text "cases"} or @{text "induct"} rules produced by wenzelm@44117: the @{command inductive} package coincide with the expected wenzelm@44117: elimination rules for Natural Deduction. Already in the original wenzelm@44117: article by Gerhard Gentzen \cite{Gentzen:1935} there is a hint that wenzelm@44117: each connective can be characterized by its introductions, and the wenzelm@44117: elimination can be constructed systematically. 
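  \medskip For example, the generated rule @{text "OR.cases"} is
  essentially the familiar elimination rule for disjunction:

  @{text [display] "OR A B \<Longrightarrow> (A \<Longrightarrow> P) \<Longrightarrow> (B \<Longrightarrow> P) \<Longrightarrow> P"}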
*} wenzelm@44112: wenzelm@44112: wenzelm@44112: section {* Recursive functions \label{sec:recursion} *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: \begin{matharray}{rcl} wenzelm@44112: @{command_def (HOL) "primrec"} & : & @{text "local_theory \ local_theory"} \\ wenzelm@44112: @{command_def (HOL) "fun"} & : & @{text "local_theory \ local_theory"} \\ wenzelm@44112: @{command_def (HOL) "function"} & : & @{text "local_theory \ proof(prove)"} \\ wenzelm@44112: @{command_def (HOL) "termination"} & : & @{text "local_theory \ proof(prove)"} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: @{rail " wenzelm@44112: @@{command (HOL) primrec} @{syntax target}? @{syntax \"fixes\"} @'where' equations wenzelm@44112: ; wenzelm@44112: (@@{command (HOL) fun} | @@{command (HOL) function}) @{syntax target}? functionopts? wenzelm@44112: @{syntax \"fixes\"} \\ @'where' equations wenzelm@44112: ; wenzelm@44112: wenzelm@44112: equations: (@{syntax thmdecl}? @{syntax prop} + '|') wenzelm@44112: ; wenzelm@44112: functionopts: '(' (('sequential' | 'domintros') + ',') ')' wenzelm@44112: ; wenzelm@44112: @@{command (HOL) termination} @{syntax term}? wenzelm@44112: "} wenzelm@44112: wenzelm@44112: \begin{description} wenzelm@44112: wenzelm@44112: \item @{command (HOL) "primrec"} defines primitive recursive wenzelm@44116: functions over datatypes (see also @{command_ref (HOL) datatype} and wenzelm@44116: @{command_ref (HOL) rep_datatype}). The given @{text equations} wenzelm@44116: specify reduction rules that are produced by instantiating the wenzelm@44116: generic combinator for primitive recursion that is available for wenzelm@44116: each datatype. wenzelm@44116: wenzelm@44116: Each equation needs to be of the form: wenzelm@44116: wenzelm@44116: @{text [display] "f x\<^sub>1 \ x\<^sub>m (C y\<^sub>1 \ y\<^sub>k) z\<^sub>1 \ z\<^sub>n = rhs"} wenzelm@44116: wenzelm@44116: such that @{text C} is a datatype constructor, @{text rhs} contains wenzelm@44116: only the free variables on the left-hand side (or from the context), wenzelm@44116: and all recursive occurrences of @{text "f"} in @{text "rhs"} are of wenzelm@44116: the form @{text "f \ y\<^sub>i \"} for some @{text i}. At most one wenzelm@44116: reduction rule for each constructor can be given. The order does wenzelm@44116: not matter. For missing constructors, the function is defined to wenzelm@44116: return a default value, but this equation is made difficult to wenzelm@44116: access for users. wenzelm@44116: wenzelm@44116: The reduction rules are declared as @{attribute simp} by default, wenzelm@44116: which enables standard proof methods like @{method simp} and wenzelm@44116: @{method auto} to normalize expressions of @{text "f"} applied to wenzelm@44116: datatype constructions, by simulating symbolic computation via wenzelm@44116: rewriting. wenzelm@44112: wenzelm@44112: \item @{command (HOL) "function"} defines functions by general wenzelm@44112: wellfounded recursion. A detailed description with examples can be wenzelm@44112: found in \cite{isabelle-function}. The function is specified by a wenzelm@44112: set of (possibly conditional) recursive equations with arbitrary wenzelm@44112: pattern matching. The command generates proof obligations for the wenzelm@44112: completeness and the compatibility of patterns. 
The defined function is considered partial, and the resulting
  simplification rules (named @{text "f.psimps"}) and induction rule
  (named @{text "f.pinduct"}) are guarded by a generated domain
  predicate @{text "f_dom"}.  The @{command (HOL) "termination"}
  command can then be used to establish that the function is total.

  \item @{command (HOL) "fun"} is a shorthand notation for ``@{command
  (HOL) "function"}~@{text "(sequential)"}'', followed by automated
  proof attempts regarding pattern matching and termination.  See
  \cite{isabelle-function} for further details.

  \item @{command (HOL) "termination"}~@{text f} commences a
  termination proof for the previously defined function @{text f}.  If
  this is omitted, the command refers to the most recent function
  definition.  After the proof is closed, the recursive equations and
  the induction principle are established.

  \end{description}

  Recursive definitions introduced by the @{command (HOL) "function"}
  command accommodate reasoning by induction (cf.\ @{method induct}):
  rule @{text "f.induct"} refers to a specific induction rule, with
  parameters named according to the user-specified equations.  Cases
  are numbered starting from 1.  For @{command (HOL) "primrec"}, the
  induction principle coincides with structural recursion on the
  datatype where the recursion is carried out.

  The equations provided by these packages may be referred to later as
  the theorem list @{text "f.simps"}, where @{text f} is the
  (collective) name of the functions defined.  Individual equations
  may be named explicitly as well.

  The @{command (HOL) "function"} command accepts the following
  options.

  \begin{description}

  \item @{text sequential} enables a preprocessor which disambiguates
  overlapping patterns by making them mutually disjoint.  Earlier
  equations take precedence over later ones.  This allows the
  specification to be given in a format very similar to functional
  programming.  Note that the resulting simplification and induction
  rules correspond to the transformed specification, not the one given
  originally.  This usually means that each equation given by the user
  may result in several theorems.  Also note that this automatic
  transformation only works for ML-style datatype patterns.

  \item @{text domintros} enables the automated generation of
  introduction rules for the domain predicate.  While mostly not
  needed, they can be helpful in some proofs about partial functions.

  \end{description}
*}

subsubsection {* Example: evaluation of expressions *}

text {* Subsequently, we define mutual datatypes for arithmetic and
  boolean expressions, and use @{command primrec} for evaluation
  functions that follow the same recursive structure.
*} wenzelm@44116: wenzelm@44116: datatype 'a aexp = wenzelm@44116: IF "'a bexp" "'a aexp" "'a aexp" wenzelm@44116: | Sum "'a aexp" "'a aexp" wenzelm@44116: | Diff "'a aexp" "'a aexp" wenzelm@44116: | Var 'a wenzelm@44116: | Num nat wenzelm@44116: and 'a bexp = wenzelm@44116: Less "'a aexp" "'a aexp" wenzelm@44116: | And "'a bexp" "'a bexp" wenzelm@44116: | Neg "'a bexp" wenzelm@44116: wenzelm@44116: wenzelm@44116: text {* \medskip Evaluation of arithmetic and boolean expressions *} wenzelm@44116: wenzelm@44116: primrec evala :: "('a \ nat) \ 'a aexp \ nat" wenzelm@44116: and evalb :: "('a \ nat) \ 'a bexp \ bool" wenzelm@44116: where wenzelm@44116: "evala env (IF b a1 a2) = (if evalb env b then evala env a1 else evala env a2)" wenzelm@44116: | "evala env (Sum a1 a2) = evala env a1 + evala env a2" wenzelm@44116: | "evala env (Diff a1 a2) = evala env a1 - evala env a2" wenzelm@44116: | "evala env (Var v) = env v" wenzelm@44116: | "evala env (Num n) = n" wenzelm@44116: | "evalb env (Less a1 a2) = (evala env a1 < evala env a2)" wenzelm@44116: | "evalb env (And b1 b2) = (evalb env b1 \ evalb env b2)" wenzelm@44116: | "evalb env (Neg b) = (\ evalb env b)" wenzelm@44116: wenzelm@44116: text {* Since the value of an expression depends on the value of its wenzelm@44116: variables, the functions @{const evala} and @{const evalb} take an wenzelm@44116: additional parameter, an \emph{environment} that maps variables to wenzelm@44116: their values. wenzelm@44116: wenzelm@44116: \medskip Substitution on expressions can be defined similarly. The wenzelm@44116: mapping @{text f} of type @{typ "'a \ 'a aexp"} given as a wenzelm@44116: parameter is lifted canonically on the types @{typ "'a aexp"} and wenzelm@44116: @{typ "'a bexp"}, respectively. wenzelm@44116: *} wenzelm@44116: wenzelm@44116: primrec substa :: "('a \ 'b aexp) \ 'a aexp \ 'b aexp" wenzelm@44116: and substb :: "('a \ 'b aexp) \ 'a bexp \ 'b bexp" wenzelm@44116: where wenzelm@44116: "substa f (IF b a1 a2) = IF (substb f b) (substa f a1) (substa f a2)" wenzelm@44116: | "substa f (Sum a1 a2) = Sum (substa f a1) (substa f a2)" wenzelm@44116: | "substa f (Diff a1 a2) = Diff (substa f a1) (substa f a2)" wenzelm@44116: | "substa f (Var v) = f v" wenzelm@44116: | "substa f (Num n) = Num n" wenzelm@44116: | "substb f (Less a1 a2) = Less (substa f a1) (substa f a2)" wenzelm@44116: | "substb f (And b1 b2) = And (substb f b1) (substb f b2)" wenzelm@44116: | "substb f (Neg b) = Neg (substb f b)" wenzelm@44116: wenzelm@44116: text {* In textbooks about semantics one often finds substitution wenzelm@44116: theorems, which express the relationship between substitution and wenzelm@44116: evaluation. For @{typ "'a aexp"} and @{typ "'a bexp"}, we can prove wenzelm@44116: such a theorem by mutual induction, followed by simplification. wenzelm@44116: *} wenzelm@44116: wenzelm@44116: lemma subst_one: wenzelm@44116: "evala env (substa (Var (v := a')) a) = evala (env (v := evala env a')) a" wenzelm@44116: "evalb env (substb (Var (v := a')) b) = evalb (env (v := evala env a')) b" wenzelm@44116: by (induct a and b) simp_all wenzelm@44116: wenzelm@44116: lemma subst_all: wenzelm@44116: "evala env (substa s a) = evala (\x. evala env (s x)) a" wenzelm@44116: "evalb env (substb s b) = evalb (\x. 
evala env (s x)) b" wenzelm@44116: by (induct a and b) simp_all wenzelm@44116: wenzelm@44116: wenzelm@44116: subsubsection {* Example: a substitution function for terms *} wenzelm@44116: wenzelm@44116: text {* Functions on datatypes with nested recursion are also defined wenzelm@44116: by mutual primitive recursion. *} wenzelm@44116: wenzelm@44116: datatype ('a, 'b) "term" = Var 'a | App 'b "('a, 'b) term list" wenzelm@44116: wenzelm@44116: text {* A substitution function on type @{typ "('a, 'b) term"} can be wenzelm@44116: defined as follows, by working simultaneously on @{typ "('a, 'b) wenzelm@44116: term list"}: *} wenzelm@44116: wenzelm@44116: primrec subst_term :: "('a \ ('a, 'b) term) \ ('a, 'b) term \ ('a, 'b) term" and wenzelm@44116: subst_term_list :: "('a \ ('a, 'b) term) \ ('a, 'b) term list \ ('a, 'b) term list" wenzelm@44116: where wenzelm@44116: "subst_term f (Var a) = f a" wenzelm@44116: | "subst_term f (App b ts) = App b (subst_term_list f ts)" wenzelm@44116: | "subst_term_list f [] = []" wenzelm@44116: | "subst_term_list f (t # ts) = subst_term f t # subst_term_list f ts" wenzelm@44116: wenzelm@44116: text {* The recursion scheme follows the structure of the unfolded wenzelm@44116: definition of type @{typ "('a, 'b) term"}. To prove properties of this wenzelm@44116: substitution function, mutual induction is needed: wenzelm@44116: *} wenzelm@44116: wenzelm@44116: lemma "subst_term (subst_term f1 \ f2) t = subst_term f1 (subst_term f2 t)" and wenzelm@44116: "subst_term_list (subst_term f1 \ f2) ts = subst_term_list f1 (subst_term_list f2 ts)" wenzelm@44116: by (induct t and ts) simp_all wenzelm@44116: wenzelm@44116: wenzelm@44116: subsubsection {* Example: a map function for infinitely branching trees *} wenzelm@44116: wenzelm@44116: text {* Defining functions on infinitely branching datatypes by wenzelm@44116: primitive recursion is just as easy. wenzelm@44116: *} wenzelm@44116: wenzelm@44116: datatype 'a tree = Atom 'a | Branch "nat \ 'a tree" wenzelm@44116: wenzelm@44116: primrec map_tree :: "('a \ 'b) \ 'a tree \ 'b tree" wenzelm@44116: where wenzelm@44116: "map_tree f (Atom a) = Atom (f a)" wenzelm@44116: | "map_tree f (Branch ts) = Branch (\x. map_tree f (ts x))" wenzelm@44116: wenzelm@44116: text {* Note that all occurrences of functions such as @{text ts} wenzelm@44116: above must be applied to an argument. In particular, @{term wenzelm@44116: "map_tree f \ ts"} is not allowed here. 
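  Instead, such a composition needs to be written in eta-expanded
  form, as in the @{text Branch} equation above:
  @{text "\<lambda>x. map_tree f (ts x)"}.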
*} wenzelm@44116: wenzelm@44116: text {* Here is a simple composition lemma for @{term map_tree}: *} wenzelm@44116: wenzelm@44116: lemma "map_tree g (map_tree f t) = map_tree (g \ f) t" wenzelm@44116: by (induct t) simp_all wenzelm@44116: wenzelm@44112: wenzelm@44112: subsection {* Proof methods related to recursive definitions *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: \begin{matharray}{rcl} wenzelm@44112: @{method_def (HOL) pat_completeness} & : & @{text method} \\ wenzelm@44112: @{method_def (HOL) relation} & : & @{text method} \\ wenzelm@44112: @{method_def (HOL) lexicographic_order} & : & @{text method} \\ wenzelm@44112: @{method_def (HOL) size_change} & : & @{text method} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: @{rail " wenzelm@44112: @@{method (HOL) relation} @{syntax term} wenzelm@44112: ; wenzelm@44112: @@{method (HOL) lexicographic_order} (@{syntax clasimpmod} * ) wenzelm@44112: ; wenzelm@44112: @@{method (HOL) size_change} ( orders (@{syntax clasimpmod} * ) ) wenzelm@44112: ; wenzelm@44112: orders: ( 'max' | 'min' | 'ms' ) * wenzelm@44112: "} wenzelm@44112: wenzelm@44112: \begin{description} wenzelm@44112: wenzelm@44112: \item @{method (HOL) pat_completeness} is a specialized method to wenzelm@44112: solve goals regarding the completeness of pattern matching, as wenzelm@44112: required by the @{command (HOL) "function"} package (cf.\ wenzelm@44112: \cite{isabelle-function}). wenzelm@44112: wenzelm@44112: \item @{method (HOL) relation}~@{text R} introduces a termination wenzelm@44112: proof using the relation @{text R}. The resulting proof state will wenzelm@44112: contain goals expressing that @{text R} is wellfounded, and that the wenzelm@44112: arguments of recursive calls decrease with respect to @{text R}. wenzelm@44112: Usually, this method is used as the initial proof step of manual wenzelm@44112: termination proofs. wenzelm@44112: wenzelm@44112: \item @{method (HOL) "lexicographic_order"} attempts a fully wenzelm@44112: automated termination proof by searching for a lexicographic wenzelm@44112: combination of size measures on the arguments of the function. The wenzelm@44112: method accepts the same arguments as the @{method auto} method, wenzelm@44134: which it uses internally to prove local descents. The @{syntax wenzelm@44134: clasimpmod} modifiers are accepted (as for @{method auto}). wenzelm@44112: wenzelm@44112: In case of failure, extensive information is printed, which can help wenzelm@44112: to analyse the situation (cf.\ \cite{isabelle-function}). wenzelm@44112: wenzelm@44112: \item @{method (HOL) "size_change"} also works on termination goals, wenzelm@44112: using a variation of the size-change principle, together with a wenzelm@44112: graph decomposition technique (see \cite{krauss_phd} for details). wenzelm@44112: Three kinds of orders are used internally: @{text max}, @{text min}, wenzelm@44112: and @{text ms} (multiset), which is only available when the theory wenzelm@44112: @{text Multiset} is loaded. When no order kinds are given, they are wenzelm@44112: tried in order. The search for a termination proof uses SAT solving wenzelm@44112: internally. wenzelm@44112: wenzelm@44134: For local descent proofs, the @{syntax clasimpmod} modifiers are wenzelm@44134: accepted (as for @{method auto}). 
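  In practice, these methods are applied to the goal produced by the
  @{command (HOL) "termination"} command, e.g.\ (as a sketch)
  @{text [display] "termination f by size_change"}
  or @{text "by (relation R) auto"} for a manually chosen wellfounded
  relation @{text R}.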
wenzelm@44112: wenzelm@44112: \end{description} wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44112: subsection {* Functions with explicit partiality *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: \begin{matharray}{rcl} wenzelm@44112: @{command_def (HOL) "partial_function"} & : & @{text "local_theory \ local_theory"} \\ wenzelm@44112: @{attribute_def (HOL) "partial_function_mono"} & : & @{text attribute} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: @{rail " wenzelm@44112: @@{command (HOL) partial_function} @{syntax target}? wenzelm@44112: '(' @{syntax nameref} ')' @{syntax \"fixes\"} \\ wenzelm@44112: @'where' @{syntax thmdecl}? @{syntax prop} wenzelm@44112: "} wenzelm@44112: wenzelm@44112: \begin{description} wenzelm@44112: wenzelm@44112: \item @{command (HOL) "partial_function"}~@{text "(mode)"} defines wenzelm@44112: recursive functions based on fixpoints in complete partial wenzelm@44112: orders. No termination proof is required from the user or wenzelm@44112: constructed internally. Instead, the possibility of non-termination wenzelm@44112: is modelled explicitly in the result type, which contains an wenzelm@44112: explicit bottom element. wenzelm@44112: wenzelm@44112: Pattern matching and mutual recursion are currently not supported. wenzelm@44112: Thus, the specification consists of a single function described by a wenzelm@44112: single recursive equation. wenzelm@44112: wenzelm@44112: There are no fixed syntactic restrictions on the body of the wenzelm@44112: function, but the induced functional must be provably monotonic wenzelm@44112: wrt.\ the underlying order. The monotonicitity proof is performed wenzelm@44112: internally, and the definition is rejected when it fails. The proof wenzelm@44112: can be influenced by declaring hints using the wenzelm@44112: @{attribute (HOL) partial_function_mono} attribute. wenzelm@44112: wenzelm@44112: The mandatory @{text mode} argument specifies the mode of operation wenzelm@44112: of the command, which directly corresponds to a complete partial wenzelm@44112: order on the result type. By default, the following modes are wenzelm@44112: defined: wenzelm@44112: wenzelm@44112: \begin{description} wenzelm@44112: \item @{text option} defines functions that map into the @{type wenzelm@44112: option} type. Here, the value @{term None} is used to model a wenzelm@44112: non-terminating computation. Monotonicity requires that if @{term wenzelm@44112: None} is returned by a recursive call, then the overall result wenzelm@44112: must also be @{term None}. This is best achieved through the use of wenzelm@44112: the monadic operator @{const "Option.bind"}. wenzelm@44112: wenzelm@44112: \item @{text tailrec} defines functions with an arbitrary result wenzelm@44112: type and uses the slightly degenerated partial order where @{term wenzelm@44112: "undefined"} is the bottom element. Now, monotonicity requires that wenzelm@44112: if @{term undefined} is returned by a recursive call, then the wenzelm@44112: overall result must also be @{term undefined}. In practice, this is wenzelm@44112: only satisfied when each recursive call is a tail call, whose result wenzelm@44112: is directly returned. Thus, this mode of operation allows the wenzelm@44112: definition of arbitrary tail-recursive functions. wenzelm@44112: \end{description} wenzelm@44112: wenzelm@44112: Experienced users may define new modes by instantiating the locale wenzelm@44112: @{const "partial_function_definitions"} appropriately. 
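  For example, a function in the @{text option} mode whose termination
  is not known can be specified as follows (a hypothetical sketch):

  @{text [display] "partial_function (option) collatz :: nat \<Rightarrow> nat option
    where
      collatz n = (if n \<le> 1 then Some n
        else if even n then collatz (n div 2)
        else collatz (3 * n + 1))"}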
\item @{attribute (HOL) partial_function_mono} declares rules for
  use in the internal monotonicity proofs of partial function
  definitions.

  \end{description}

*}


subsection {* Old-style recursive function definitions (TFL) *}

text {*
  The old TFL commands @{command (HOL) "recdef"} and @{command (HOL)
  "recdef_tc"} for defining recursive functions are mostly obsolete;
  @{command (HOL) "function"} or @{command (HOL) "fun"} should be used
  instead.

  \begin{matharray}{rcl}
    @{command_def (HOL) "recdef"} & : & @{text "theory \<rightarrow> theory"} \\
    @{command_def (HOL) "recdef_tc"}@{text "\<^sup>*"} & : & @{text "theory \<rightarrow> proof(prove)"} \\
  \end{matharray}

  @{rail "
    @@{command (HOL) recdef} ('(' @'permissive' ')')? \\
      @{syntax name} @{syntax term} (@{syntax prop} +) hints?
    ;
    recdeftc @{syntax thmdecl}? tc
    ;
    hints: '(' @'hints' ( recdefmod * ) ')'
    ;
    recdefmod: (('recdef_simp' | 'recdef_cong' | 'recdef_wf')
      (() | 'add' | 'del') ':' @{syntax thmrefs}) | @{syntax clasimpmod}
    ;
    tc: @{syntax nameref} ('(' @{syntax nat} ')')?
  "}

  \begin{description}

  \item @{command (HOL) "recdef"} defines general well-founded
  recursive functions (using the TFL package), see also
  \cite{isabelle-HOL}.  The ``@{text "(permissive)"}'' option tells
  TFL to recover from failed proof attempts, returning unfinished
  results.  The @{text recdef_simp}, @{text recdef_cong}, and @{text
  recdef_wf} hints refer to auxiliary rules to be used in the internal
  automated proof process of TFL.  Additional @{syntax clasimpmod}
  declarations may be given to tune the context of the Simplifier
  (cf.\ \secref{sec:simplifier}) and Classical reasoner (cf.\
  \secref{sec:classical}).

  \item @{command (HOL) "recdef_tc"}~@{text "c (i)"} recommences the
  proof for leftover termination condition number @{text i} (default
  1) as generated by a @{command (HOL) "recdef"} definition of
  constant @{text c}.

  Note that in most cases, @{command (HOL) "recdef"} is able to finish
  its internal proofs without manual intervention.

  \end{description}

  \medskip Hints for @{command (HOL) "recdef"} may also be declared
  globally, using the following attributes.
wenzelm@44112: wenzelm@44112: \begin{matharray}{rcl} wenzelm@44112: @{attribute_def (HOL) recdef_simp} & : & @{text attribute} \\ wenzelm@44112: @{attribute_def (HOL) recdef_cong} & : & @{text attribute} \\ wenzelm@44112: @{attribute_def (HOL) recdef_wf} & : & @{text attribute} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: @{rail " wenzelm@44112: (@@{attribute (HOL) recdef_simp} | @@{attribute (HOL) recdef_cong} | wenzelm@44112: @@{attribute (HOL) recdef_wf}) (() | 'add' | 'del') wenzelm@44112: "} wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44112: section {* Datatypes \label{sec:hol-datatype} *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: \begin{matharray}{rcl} wenzelm@44112: @{command_def (HOL) "datatype"} & : & @{text "theory \ theory"} \\ wenzelm@44112: @{command_def (HOL) "rep_datatype"} & : & @{text "theory \ proof(prove)"} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: @{rail " wenzelm@44112: @@{command (HOL) datatype} (spec + @'and') wenzelm@44112: ; wenzelm@44112: @@{command (HOL) rep_datatype} ('(' (@{syntax name} +) ')')? (@{syntax term} +) wenzelm@44112: ; wenzelm@44112: wenzelm@44112: spec: @{syntax parname}? @{syntax typespec} @{syntax mixfix}? '=' (cons + '|') wenzelm@44112: ; wenzelm@44112: cons: @{syntax name} (@{syntax type} * ) @{syntax mixfix}? wenzelm@44112: "} wenzelm@44112: wenzelm@44112: \begin{description} wenzelm@44112: wenzelm@44112: \item @{command (HOL) "datatype"} defines inductive datatypes in wenzelm@44112: HOL. wenzelm@44112: wenzelm@44112: \item @{command (HOL) "rep_datatype"} represents existing types as wenzelm@44113: datatypes. wenzelm@44113: wenzelm@44113: For foundational reasons, some basic types such as @{typ nat}, @{typ wenzelm@44113: "'a \ 'b"}, @{typ "'a + 'b"}, @{typ bool} and @{typ unit} are wenzelm@44113: introduced by more primitive means using @{command_ref typedef}. To wenzelm@44113: recover the rich infrastructure of @{command datatype} (e.g.\ rules wenzelm@44113: for @{method cases} and @{method induct} and the primitive recursion wenzelm@44113: combinators), such types may be represented as actual datatypes wenzelm@44113: later. This is done by specifying the constructors of the desired wenzelm@44113: type, and giving a proof of the induction rule, distinctness and wenzelm@44113: injectivity of constructors. wenzelm@44113: wenzelm@44113: For example, see @{file "~~/src/HOL/Sum_Type.thy"} for the wenzelm@44113: representation of the primitive sum type as fully-featured datatype. wenzelm@44112: wenzelm@44112: \end{description} wenzelm@44112: wenzelm@44113: The generated rules for @{method induct} and @{method cases} provide wenzelm@44113: case names according to the given constructors, while parameters are wenzelm@44113: named after the types (see also \secref{sec:cases-induct}). wenzelm@44112: wenzelm@44112: See \cite{isabelle-HOL} for more details on datatypes, but beware of wenzelm@44112: the old-style theory syntax being used there! Apart from proper wenzelm@44112: proof methods for case-analysis and induction, there are also wenzelm@44112: emulations of ML tactics @{method (HOL) case_tac} and @{method (HOL) wenzelm@44112: induct_tac} available, see \secref{sec:hol-induct-tac}; these admit wenzelm@44112: to refer directly to the internal structure of subgoals (including wenzelm@44112: internally bound parameters). 
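  For example, in an unstructured tactic script one may perform
  structural induction over a variable @{text xs} of the subgoal, even
  if it is a locally bound parameter rather than a fixed variable of
  the proof context, as in the following sketch:

  @{text [display] "apply (induct_tac xs)"}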
wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44114: subsubsection {* Examples *} wenzelm@44114: wenzelm@44114: text {* We define a type of finite sequences, with slightly different wenzelm@44114: names than the existing @{typ "'a list"} that is already in @{theory wenzelm@44114: Main}: *} wenzelm@44114: wenzelm@44114: datatype 'a seq = Empty | Seq 'a "'a seq" wenzelm@44114: wenzelm@44114: text {* We can now prove some simple lemma by structural induction: *} wenzelm@44114: wenzelm@44114: lemma "Seq x xs \ xs" wenzelm@44114: proof (induct xs arbitrary: x) wenzelm@44114: case Empty wenzelm@44114: txt {* This case can be proved using the simplifier: the freeness wenzelm@44114: properties of the datatype are already declared as @{attribute wenzelm@44114: simp} rules. *} wenzelm@44114: show "Seq x Empty \ Empty" wenzelm@44114: by simp wenzelm@44114: next wenzelm@44114: case (Seq y ys) wenzelm@44114: txt {* The step case is proved similarly. *} wenzelm@44114: show "Seq x (Seq y ys) \ Seq y ys" wenzelm@44114: using `Seq y ys \ ys` by simp wenzelm@44114: qed wenzelm@44114: wenzelm@44114: text {* Here is a more succinct version of the same proof: *} wenzelm@44114: wenzelm@44114: lemma "Seq x xs \ xs" wenzelm@44114: by (induct xs arbitrary: x) simp_all wenzelm@44114: wenzelm@44114: wenzelm@44112: section {* Records \label{sec:hol-record} *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: In principle, records merely generalize the concept of tuples, where wenzelm@44112: components may be addressed by labels instead of just position. The wenzelm@44112: logical infrastructure of records in Isabelle/HOL is slightly more wenzelm@44112: advanced, though, supporting truly extensible record schemes. This wenzelm@44112: admits operations that are polymorphic with respect to record wenzelm@44112: extension, yielding ``object-oriented'' effects like (single) wenzelm@44112: inheritance. See also \cite{NaraschewskiW-TPHOLs98} for more wenzelm@44112: details on object-oriented verification and record subtyping in HOL. wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44112: subsection {* Basic concepts *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: Isabelle/HOL supports both \emph{fixed} and \emph{schematic} records wenzelm@44112: at the level of terms and types. The notation is as follows: wenzelm@44112: wenzelm@44112: \begin{center} wenzelm@44112: \begin{tabular}{l|l|l} wenzelm@44112: & record terms & record types \\ \hline wenzelm@44112: fixed & @{text "\x = a, y = b\"} & @{text "\x :: A, y :: B\"} \\ wenzelm@44112: schematic & @{text "\x = a, y = b, \ = m\"} & wenzelm@44112: @{text "\x :: A, y :: B, \ :: M\"} \\ wenzelm@44112: \end{tabular} wenzelm@44112: \end{center} wenzelm@44112: wenzelm@44112: \noindent The ASCII representation of @{text "\x = a\"} is @{text wenzelm@44112: "(| x = a |)"}. wenzelm@44112: wenzelm@44112: A fixed record @{text "\x = a, y = b\"} has field @{text x} of value wenzelm@44112: @{text a} and field @{text y} of value @{text b}. The corresponding wenzelm@44112: type is @{text "\x :: A, y :: B\"}, assuming that @{text "a :: A"} wenzelm@44112: and @{text "b :: B"}. wenzelm@44112: wenzelm@44112: A record scheme like @{text "\x = a, y = b, \ = m\"} contains fields wenzelm@44112: @{text x} and @{text y} as before, but also possibly further fields wenzelm@44112: as indicated by the ``@{text "\"}'' notation (which is actually part wenzelm@44112: of the syntax). The improper field ``@{text "\"}'' of a record wenzelm@44112: scheme is called the \emph{more part}. 
Logically it is just a free wenzelm@44112: variable, which is occasionally referred to as ``row variable'' in wenzelm@44112: the literature. The more part of a record scheme may be wenzelm@44112: instantiated by zero or more further components. For example, the wenzelm@44112: previous scheme may get instantiated to @{text "\x = a, y = b, z = wenzelm@44112: c, \ = m'\"}, where @{text m'} refers to a different more part. wenzelm@44112: Fixed records are special instances of record schemes, where wenzelm@44112: ``@{text "\"}'' is properly terminated by the @{text "() :: unit"} wenzelm@44112: element. In fact, @{text "\x = a, y = b\"} is just an abbreviation wenzelm@44112: for @{text "\x = a, y = b, \ = ()\"}. wenzelm@44112: wenzelm@44112: \medskip Two key observations make extensible records in a simply wenzelm@44112: typed language like HOL work out: wenzelm@44112: wenzelm@44112: \begin{enumerate} wenzelm@44112: wenzelm@44112: \item the more part is internalized, as a free term or type wenzelm@44112: variable, wenzelm@44112: wenzelm@44112: \item field names are externalized, they cannot be accessed within wenzelm@44112: the logic as first-class values. wenzelm@44112: wenzelm@44112: \end{enumerate} wenzelm@44112: wenzelm@44112: \medskip In Isabelle/HOL record types have to be defined explicitly, wenzelm@44112: fixing their field names and types, and their (optional) parent wenzelm@44112: record. Afterwards, records may be formed using above syntax, while wenzelm@44112: obeying the canonical order of fields as given by their declaration. wenzelm@44112: The record package provides several standard operations like wenzelm@44112: selectors and updates. The common setup for various generic proof wenzelm@44112: tools enable succinct reasoning patterns. See also the Isabelle/HOL wenzelm@44112: tutorial \cite{isabelle-hol-book} for further instructions on using wenzelm@44112: records in practice. wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44112: subsection {* Record specifications *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: \begin{matharray}{rcl} wenzelm@44112: @{command_def (HOL) "record"} & : & @{text "theory \ theory"} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: @{rail " wenzelm@44112: @@{command (HOL) record} @{syntax typespec_sorts} '=' \\ wenzelm@44112: (@{syntax type} '+')? (@{syntax constdecl} +) wenzelm@44112: "} wenzelm@44112: wenzelm@44112: \begin{description} wenzelm@44112: wenzelm@44112: \item @{command (HOL) "record"}~@{text "(\\<^sub>1, \, \\<^sub>m) t = \ + c\<^sub>1 :: \\<^sub>1 wenzelm@44112: \ c\<^sub>n :: \\<^sub>n"} defines extensible record type @{text "(\\<^sub>1, \, \\<^sub>m) t"}, wenzelm@44112: derived from the optional parent record @{text "\"} by adding new wenzelm@44112: field components @{text "c\<^sub>i :: \\<^sub>i"} etc. wenzelm@44112: wenzelm@44112: The type variables of @{text "\"} and @{text "\\<^sub>i"} need to be wenzelm@44112: covered by the (distinct) parameters @{text "\\<^sub>1, \, wenzelm@44112: \\<^sub>m"}. Type constructor @{text t} has to be new, while @{text wenzelm@44112: \} needs to specify an instance of an existing record type. At wenzelm@44112: least one new field @{text "c\<^sub>i"} has to be specified. wenzelm@44112: Basically, field names need to belong to a unique record. This is wenzelm@44112: not a real restriction in practice, since fields are qualified by wenzelm@44112: the record name internally. 
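  For example, a hypothetical record of points in the plane, together
  with an extension by a colour field, could be specified as follows
  (a sketch):

  @{text [display] "record point =
    x :: int
    y :: int
  record cpoint = point +
    colour :: string"}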
wenzelm@44112: wenzelm@44112: The parent record specification @{text \} is optional; if omitted wenzelm@44112: @{text t} becomes a root record. The hierarchy of all records wenzelm@44112: declared within a theory context forms a forest structure, i.e.\ a wenzelm@44112: set of trees starting with a root record each. There is no way to wenzelm@44112: merge multiple parent records! wenzelm@44112: wenzelm@44112: For convenience, @{text "(\\<^sub>1, \, \\<^sub>m) t"} is made a wenzelm@44112: type abbreviation for the fixed record type @{text "\c\<^sub>1 :: wenzelm@44112: \\<^sub>1, \, c\<^sub>n :: \\<^sub>n\"}, likewise is @{text wenzelm@44112: "(\\<^sub>1, \, \\<^sub>m, \) t_scheme"} made an abbreviation for wenzelm@44112: @{text "\c\<^sub>1 :: \\<^sub>1, \, c\<^sub>n :: \\<^sub>n, \ :: wenzelm@44112: \\"}. wenzelm@44112: wenzelm@44112: \end{description} wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44112: subsection {* Record operations *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: Any record definition of the form presented above produces certain wenzelm@44112: standard operations. Selectors and updates are provided for any wenzelm@44112: field, including the improper one ``@{text more}''. There are also wenzelm@44112: cumulative record constructor functions. To simplify the wenzelm@44112: presentation below, we assume for now that @{text "(\\<^sub>1, \, wenzelm@44112: \\<^sub>m) t"} is a root record with fields @{text "c\<^sub>1 :: wenzelm@44112: \\<^sub>1, \, c\<^sub>n :: \\<^sub>n"}. wenzelm@44112: wenzelm@44112: \medskip \textbf{Selectors} and \textbf{updates} are available for wenzelm@44112: any field (including ``@{text more}''): wenzelm@44112: wenzelm@44112: \begin{matharray}{lll} wenzelm@44112: @{text "c\<^sub>i"} & @{text "::"} & @{text "\\<^vec>c :: \<^vec>\, \ :: \\ \ \\<^sub>i"} \\ wenzelm@44112: @{text "c\<^sub>i_update"} & @{text "::"} & @{text "\\<^sub>i \ \\<^vec>c :: \<^vec>\, \ :: \\ \ \\<^vec>c :: \<^vec>\, \ :: \\"} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: There is special syntax for application of updates: @{text "r\x := wenzelm@44112: a\"} abbreviates term @{text "x_update a r"}. Further notation for wenzelm@44112: repeated updates is also available: @{text "r\x := a\\y := b\\z := wenzelm@44112: c\"} may be written @{text "r\x := a, y := b, z := c\"}. Note that wenzelm@44112: because of postfix notation the order of fields shown here is wenzelm@44112: reverse than in the actual term. Since repeated updates are just wenzelm@44112: function applications, fields may be freely permuted in @{text "\x wenzelm@44112: := a, y := b, z := c\"}, as far as logical equality is concerned. wenzelm@44112: Thus commutativity of independent updates can be proven within the wenzelm@44112: logic for any two fields, but not as a general theorem. wenzelm@44112: wenzelm@44112: \medskip The \textbf{make} operation provides a cumulative record wenzelm@44112: constructor function: wenzelm@44112: wenzelm@44112: \begin{matharray}{lll} wenzelm@44112: @{text "t.make"} & @{text "::"} & @{text "\\<^sub>1 \ \ \\<^sub>n \ \\<^vec>c :: \<^vec>\\"} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: \medskip We now reconsider the case of non-root records, which are wenzelm@44112: derived of some parent. In general, the latter may depend on wenzelm@44112: another parent as well, resulting in a list of \emph{ancestor wenzelm@44112: records}. Appending the lists of fields of all ancestors results in wenzelm@44112: a certain field prefix. 
The record package automatically takes care wenzelm@44112: of this by lifting operations over this context of ancestor fields. wenzelm@44112: Assuming that @{text "(\\<^sub>1, \, \\<^sub>m) t"} has ancestor wenzelm@44112: fields @{text "b\<^sub>1 :: \\<^sub>1, \, b\<^sub>k :: \\<^sub>k"}, wenzelm@44112: the above record operations will get the following types: wenzelm@44112: wenzelm@44112: \medskip wenzelm@44112: \begin{tabular}{lll} wenzelm@44112: @{text "c\<^sub>i"} & @{text "::"} & @{text "\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ \\<^sub>i"} \\ wenzelm@44112: @{text "c\<^sub>i_update"} & @{text "::"} & @{text "\\<^sub>i \ wenzelm@44112: \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ wenzelm@44112: \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\"} \\ wenzelm@44112: @{text "t.make"} & @{text "::"} & @{text "\\<^sub>1 \ \ \\<^sub>k \ \\<^sub>1 \ \ \\<^sub>n \ wenzelm@44112: \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\"} \\ wenzelm@44112: \end{tabular} wenzelm@44112: \medskip wenzelm@44112: wenzelm@44112: \noindent Some further operations address the extension aspect of a wenzelm@44112: derived record scheme specifically: @{text "t.fields"} produces a wenzelm@44112: record fragment consisting of exactly the new fields introduced here wenzelm@44112: (the result may serve as a more part elsewhere); @{text "t.extend"} wenzelm@44112: takes a fixed record and adds a given more part; @{text wenzelm@44112: "t.truncate"} restricts a record scheme to a fixed record. wenzelm@44112: wenzelm@44112: \medskip wenzelm@44112: \begin{tabular}{lll} wenzelm@44112: @{text "t.fields"} & @{text "::"} & @{text "\\<^sub>1 \ \ \\<^sub>n \ \\<^vec>c :: \<^vec>\\"} \\ wenzelm@44112: @{text "t.extend"} & @{text "::"} & @{text "\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\ \ wenzelm@44112: \ \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\"} \\ wenzelm@44112: @{text "t.truncate"} & @{text "::"} & @{text "\\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\, \ :: \\ \ \\<^vec>b :: \<^vec>\, \<^vec>c :: \<^vec>\\"} \\ wenzelm@44112: \end{tabular} wenzelm@44112: \medskip wenzelm@44112: wenzelm@44112: \noindent Note that @{text "t.make"} and @{text "t.fields"} coincide wenzelm@44112: for root records. wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44112: subsection {* Derived rules and proof tools *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: The record package proves several results internally, declaring wenzelm@44112: these facts to appropriate proof tools. This enables users to wenzelm@44112: reason about record structures quite conveniently. Assume that wenzelm@44112: @{text t} is a record type as specified above. wenzelm@44112: wenzelm@44112: \begin{enumerate} wenzelm@44112: wenzelm@44112: \item Standard conversions for selectors or updates applied to wenzelm@44112: record constructor terms are made part of the default Simplifier wenzelm@44112: context; thus proofs by reduction of basic operations merely require wenzelm@44112: the @{method simp} method without further arguments. These rules wenzelm@44112: are available as @{text "t.simps"}, too. wenzelm@44112: wenzelm@44112: \item Selectors applied to updated records are automatically reduced wenzelm@44112: by an internal simplification procedure, which is also part of the wenzelm@44112: standard Simplifier setup. 
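  For example, equations such as @{text "x (r\<lparr>x := a\<rparr>) = a"} and
  @{text "y (r\<lparr>x := a\<rparr>) = y r"} are solved by the @{method simp}
  method without further hints.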
wenzelm@44112: wenzelm@44112: \item Inject equations of a form analogous to @{prop "(x, y) = (x', wenzelm@44112: y') \ x = x' \ y = y'"} are declared to the Simplifier and Classical wenzelm@44112: Reasoner as @{attribute iff} rules. These rules are available as wenzelm@44112: @{text "t.iffs"}. wenzelm@44112: wenzelm@44112: \item The introduction rule for record equality analogous to @{text wenzelm@44112: "x r = x r' \ y r = y r' \ \ r = r'"} is declared to the Simplifier, wenzelm@44112: and as the basic rule context as ``@{attribute intro}@{text "?"}''. wenzelm@44112: The rule is called @{text "t.equality"}. wenzelm@44112: wenzelm@44112: \item Representations of arbitrary record expressions as canonical wenzelm@44112: constructor terms are provided both in @{method cases} and @{method wenzelm@44112: induct} format (cf.\ the generic proof methods of the same name, wenzelm@44112: \secref{sec:cases-induct}). Several variations are available, for wenzelm@44112: fixed records, record schemes, more parts etc. wenzelm@44112: wenzelm@44112: The generic proof methods are sufficiently smart to pick the most wenzelm@44112: sensible rule according to the type of the indicated record wenzelm@44112: expression: users just need to apply something like ``@{text "(cases wenzelm@44112: r)"}'' to a certain proof problem. wenzelm@44112: wenzelm@44112: \item The derived record operations @{text "t.make"}, @{text wenzelm@44112: "t.fields"}, @{text "t.extend"}, @{text "t.truncate"} are \emph{not} wenzelm@44112: treated automatically, but usually need to be expanded by hand, wenzelm@44112: using the collective fact @{text "t.defs"}. wenzelm@44112: wenzelm@44112: \end{enumerate} wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@44115: subsubsection {* Examples *} wenzelm@44115: wenzelm@44115: text {* See @{file "~~/src/HOL/ex/Records.thy"}, for example. *} wenzelm@44115: wenzelm@44115: wenzelm@44112: section {* Adhoc tuples *} wenzelm@44112: wenzelm@44112: text {* wenzelm@44112: \begin{matharray}{rcl} wenzelm@44112: @{attribute_def (HOL) split_format}@{text "\<^sup>*"} & : & @{text attribute} \\ wenzelm@44112: \end{matharray} wenzelm@44112: wenzelm@44112: @{rail " wenzelm@44112: @@{attribute (HOL) split_format} ('(' 'complete' ')')? wenzelm@44112: "} wenzelm@44112: wenzelm@44112: \begin{description} wenzelm@44112: wenzelm@44112: \item @{attribute (HOL) split_format}\ @{text "(complete)"} causes wenzelm@44112: arguments in function applications to be represented canonically wenzelm@44112: according to their tuple type structure. wenzelm@44112: wenzelm@44112: Note that this operation tends to invent funny names for new local wenzelm@44112: parameters introduced. wenzelm@44112: wenzelm@44112: \end{description} wenzelm@44112: *} wenzelm@44112: wenzelm@44112: wenzelm@35757: section {* Typedef axiomatization \label{sec:hol-typedef} *} wenzelm@26849: wenzelm@44111: text {* A Gordon/HOL-style type definition is a certain axiom scheme wenzelm@44111: that identifies a new type with a subset of an existing type. More wenzelm@44111: precisely, the new type is defined by exhibiting an existing type wenzelm@44111: @{text \}, a set @{text "A :: \ set"}, and a theorem that proves wenzelm@44111: @{prop "\x. x \ A"}. Thus @{text A} is a non-empty subset of @{text wenzelm@44111: \}, and the new type denotes this subset. New functions are wenzelm@44111: postulated that establish an isomorphism between the new type and wenzelm@44111: the subset. 
In general, the type @{text \} may involve type wenzelm@44111: variables @{text "\\<^sub>1, \, \\<^sub>n"} which means that the type definition wenzelm@44111: produces a type constructor @{text "(\\<^sub>1, \, \\<^sub>n) t"} depending on wenzelm@44111: those type arguments. wenzelm@44111: wenzelm@44111: The axiomatization can be considered a ``definition'' in the sense wenzelm@44111: of the particular set-theoretic interpretation of HOL wenzelm@44111: \cite{pitts93}, where the universe of types is required to be wenzelm@44111: downwards-closed wrt.\ arbitrary non-empty subsets. Thus genuinely wenzelm@44111: new types introduced by @{command "typedef"} stay within the range wenzelm@44111: of HOL models by construction. Note that @{command_ref wenzelm@44111: type_synonym} from Isabelle/Pure merely introduces syntactic wenzelm@44111: abbreviations, without any logical significance. wenzelm@44111: wenzelm@26849: \begin{matharray}{rcl} wenzelm@35757: @{command_def (HOL) "typedef"} & : & @{text "local_theory \ proof(prove)"} \\ wenzelm@26849: \end{matharray} wenzelm@26849: wenzelm@43467: @{rail " wenzelm@44111: @@{command (HOL) typedef} alt_name? abs_type '=' rep_set wenzelm@26849: ; wenzelm@26849: wenzelm@44111: alt_name: '(' (@{syntax name} | @'open' | @'open' @{syntax name}) ')' wenzelm@26849: ; wenzelm@44111: abs_type: @{syntax typespec_sorts} @{syntax mixfix}? wenzelm@26849: ; wenzelm@44111: rep_set: @{syntax term} (@'morphisms' @{syntax name} @{syntax name})? wenzelm@43467: "} wenzelm@26849: wenzelm@28760: \begin{description} wenzelm@42994: wenzelm@35757: \item @{command (HOL) "typedef"}~@{text "(\\<^sub>1, \, \\<^sub>n) t = A"} wenzelm@44111: axiomatizes a type definition in the background theory of the wenzelm@44111: current context, depending on a non-emptiness result of the set wenzelm@44111: @{text A} that needs to be proven here. The set @{text A} may wenzelm@44111: contain type variables @{text "\\<^sub>1, \, \\<^sub>n"} as specified on the LHS, wenzelm@44111: but no term variables. wenzelm@35757: wenzelm@44111: Even though a local theory specification, the newly introduced type wenzelm@44111: constructor cannot depend on parameters or assumptions of the wenzelm@44111: context: this is structurally impossible in HOL. In contrast, the wenzelm@44111: non-emptiness proof may use local assumptions in unusual situations, wenzelm@44111: which could result in different interpretations in target contexts: wenzelm@44111: the meaning of the bijection between the representing set @{text A} wenzelm@44111: and the new type @{text t} may then change in different application wenzelm@44111: contexts. wenzelm@42994: wenzelm@44111: By default, @{command (HOL) "typedef"} defines both a type wenzelm@44111: constructor @{text t} for the new type, and a term constant @{text wenzelm@44111: t} for the representing set within the old type. Use the ``@{text wenzelm@44111: "(open)"}'' option to suppress a separate constant definition wenzelm@35757: altogether. The injection from type to set is called @{text Rep_t}, wenzelm@44111: its inverse @{text Abs_t}, unless explicit @{keyword (HOL) wenzelm@44111: "morphisms"} specification provides alternative names. wenzelm@42994: wenzelm@44111: The core axiomatization uses the locale predicate @{const wenzelm@44111: type_definition} as defined in Isabelle/HOL. Various basic wenzelm@44111: consequences of that are instantiated accordingly, re-using the wenzelm@44111: locale facts with names derived from the new type constructor. 
Thus wenzelm@44111: the generic @{thm type_definition.Rep} is turned into the specific wenzelm@44111: @{text "Rep_t"}, for example. wenzelm@44111: wenzelm@44111: Theorems @{thm type_definition.Rep}, @{thm wenzelm@44111: type_definition.Rep_inverse}, and @{thm type_definition.Abs_inverse} wenzelm@44111: provide the most basic characterization as a corresponding wenzelm@44111: injection/surjection pair (in both directions). The derived rules wenzelm@44111: @{thm type_definition.Rep_inject} and @{thm wenzelm@44111: type_definition.Abs_inject} provide a more convenient version of wenzelm@44111: injectivity, suitable for automated proof tools (e.g.\ in wenzelm@44111: declarations involving @{attribute simp} or @{attribute iff}). wenzelm@44111: Furthermore, the rules @{thm type_definition.Rep_cases}~/ @{thm wenzelm@44111: type_definition.Rep_induct}, and @{thm type_definition.Abs_cases}~/ wenzelm@44111: @{thm type_definition.Abs_induct} provide alternative views on wenzelm@44111: surjectivity. These rules are already declared as set or type rules wenzelm@44111: for the generic @{method cases} and @{method induct} methods, wenzelm@44111: respectively. wenzelm@42994: wenzelm@35757: An alternative name for the set definition (and other derived wenzelm@35757: entities) may be specified in parentheses; the default is to use wenzelm@44111: @{text t} directly. wenzelm@26849: wenzelm@28760: \end{description} wenzelm@44111: wenzelm@44111: \begin{warn} wenzelm@44111: If you introduce a new type axiomatically, i.e.\ via @{command_ref wenzelm@44111: typedecl} and @{command_ref axiomatization}, the minimum requirement wenzelm@44111: is that it has a non-empty model, to avoid immediate collapse of the wenzelm@44111: HOL logic. Moreover, one needs to demonstrate that the wenzelm@44111: interpretation of such free-form axiomatizations can coexist with wenzelm@44111: that of the regular @{command_def typedef} scheme, and any extension wenzelm@44111: that other people might have introduced elsewhere (e.g.\ in HOLCF wenzelm@44111: \cite{MuellerNvOS99}). wenzelm@44111: \end{warn} wenzelm@26849: *} wenzelm@26849: wenzelm@44111: subsubsection {* Examples *} wenzelm@44111: wenzelm@44111: text {* Type definitions permit the introduction of abstract data wenzelm@44111: types in a safe way, namely by providing models based on already wenzelm@44111: existing types. Given some abstract axiomatic description @{text P} wenzelm@44111: of a type, this involves two steps: wenzelm@44111: wenzelm@44111: \begin{enumerate} wenzelm@44111: wenzelm@44111: \item Find an appropriate type @{text \} and subset @{text A} which wenzelm@44111: has the desired properties @{text P}, and make a type definition wenzelm@44111: based on this representation. wenzelm@44111: wenzelm@44111: \item Prove that @{text P} holds for @{text \} by lifting @{text P} wenzelm@44111: from the representation. wenzelm@44111: wenzelm@44111: \end{enumerate} wenzelm@44111: wenzelm@44111: You can later forget about the representation and work solely in wenzelm@44111: terms of the abstract properties @{text P}. wenzelm@44111: wenzelm@44111: \medskip The following trivial example pulls a three-element type wenzelm@44111: into existence within the formal logical environment of HOL. 
*} wenzelm@44111: wenzelm@44111: typedef three = "{(True, True), (True, False), (False, True)}" wenzelm@44111: by blast wenzelm@44111: wenzelm@44111: definition "One = Abs_three (True, True)" wenzelm@44111: definition "Two = Abs_three (True, False)" wenzelm@44111: definition "Three = Abs_three (False, True)" wenzelm@44111: wenzelm@44111: lemma three_distinct: "One \ Two" "One \ Three" "Two \ Three" wenzelm@44111: by (simp_all add: One_def Two_def Three_def Abs_three_inject three_def) wenzelm@44111: wenzelm@44111: lemma three_cases: wenzelm@44111: fixes x :: three obtains "x = One" | "x = Two" | "x = Three" wenzelm@44111: by (cases x) (auto simp: One_def Two_def Three_def Abs_three_inject three_def) wenzelm@44111: wenzelm@44111: text {* Note that such trivial constructions are better done with wenzelm@44111: derived specification mechanisms such as @{command datatype}: *} wenzelm@44111: wenzelm@44111: datatype three' = One' | Two' | Three' wenzelm@44111: wenzelm@44111: text {* This avoids re-doing basic definitions and proofs from the wenzelm@44111: primitive @{command typedef} above. *} wenzelm@44111: wenzelm@26849: haftmann@41644: section {* Functorial structure of types *} haftmann@41644: haftmann@41644: text {* haftmann@41644: \begin{matharray}{rcl} haftmann@41752: @{command_def (HOL) "enriched_type"} & : & @{text "local_theory \ proof(prove)"} haftmann@41644: \end{matharray} haftmann@41644: wenzelm@43467: @{rail " wenzelm@43488: @@{command (HOL) enriched_type} (@{syntax name} ':')? @{syntax term} haftmann@41644: ; wenzelm@43488: "} haftmann@41644: haftmann@41644: \begin{description} haftmann@41644: wenzelm@43488: \item @{command (HOL) "enriched_type"}~@{text "prefix: m"} allows to wenzelm@43488: prove and register properties about the functorial structure of type wenzelm@43488: constructors. These properties then can be used by other packages wenzelm@43488: to deal with those type constructors in certain type constructions. wenzelm@43488: Characteristic theorems are noted in the current local theory. By wenzelm@43488: default, they are prefixed with the base name of the type wenzelm@43488: constructor, an explicit prefix can be given alternatively. haftmann@41644: haftmann@41644: The given term @{text "m"} is considered as \emph{mapper} for the haftmann@41644: corresponding type constructor and must conform to the following haftmann@41644: type pattern: haftmann@41644: haftmann@41644: \begin{matharray}{lll} haftmann@41644: @{text "m"} & @{text "::"} & haftmann@41644: @{text "\\<^isub>1 \ \ \\<^isub>k \ (\<^vec>\\<^isub>n) t \ (\<^vec>\\<^isub>n) t"} \\ haftmann@41644: \end{matharray} haftmann@41644: haftmann@41644: \noindent where @{text t} is the type constructor, @{text haftmann@41644: "\<^vec>\\<^isub>n"} and @{text "\<^vec>\\<^isub>n"} are distinct haftmann@41644: type variables free in the local theory and @{text "\\<^isub>1"}, haftmann@41644: \ldots, @{text "\\<^isub>k"} is a subsequence of @{text "\\<^isub>1 \ haftmann@41644: \\<^isub>1"}, @{text "\\<^isub>1 \ \\<^isub>1"}, \ldots, haftmann@41644: @{text "\\<^isub>n \ \\<^isub>n"}, @{text "\\<^isub>n \ haftmann@41644: \\<^isub>n"}. haftmann@41644: haftmann@41644: \end{description} haftmann@41644: *} haftmann@41644: bulwahn@44864: section {* Quotient types *} bulwahn@44864: bulwahn@44864: text {* bulwahn@44864: The quotient package defines a new quotient type given a raw type bulwahn@44864: and a partial equivalence relation. bulwahn@44864: It also includes automation for transporting definitions and theorems. 
bulwahn@44864: It can automatically produce definitions and theorems on the quotient type, bulwahn@44864: given the corresponding constants and facts on the raw type. bulwahn@44864: bulwahn@44864: \begin{matharray}{rcl} bulwahn@44864: @{command_def (HOL) "quotient_type"} & : & @{text "local_theory \ proof(prove)"}\\ bulwahn@44864: @{command_def (HOL) "quotient_definition"} & : & @{text "local_theory \ proof(prove)"}\\ bulwahn@44864: @{command_def (HOL) "print_quotmaps"} & : & @{text "context \"}\\ bulwahn@44864: @{command_def (HOL) "print_quotients"} & : & @{text "context \"}\\ bulwahn@44864: @{command_def (HOL) "print_quotconsts"} & : & @{text "context \"}\\ bulwahn@44864: \end{matharray} bulwahn@44864: bulwahn@44864: @{rail " bulwahn@44864: @@{command (HOL) quotient_type} (spec + @'and'); bulwahn@44864: bulwahn@44864: spec: @{syntax typespec} @{syntax mixfix}? '=' \\ bulwahn@44864: @{syntax type} '/' ('partial' ':')? @{syntax term}; bulwahn@44864: "} bulwahn@44864: bulwahn@44864: @{rail " bulwahn@44864: @@{command (HOL) quotient_definition} constdecl? @{syntax thmdecl}? \\ bulwahn@44864: @{syntax term} 'is' @{syntax term}; bulwahn@44864: bulwahn@44864: constdecl: @{syntax name} ('::' @{syntax type})? @{syntax mixfix}? bulwahn@44864: "} bulwahn@44864: bulwahn@44864: \begin{description} bulwahn@44864: bulwahn@44864: \item @{command (HOL) "quotient_type"} defines quotient types. bulwahn@44864: bulwahn@44864: \item @{command (HOL) "quotient_definition"} defines a constant on the quotient type. bulwahn@44864: bulwahn@44864: \item @{command (HOL) "print_quotmaps"} prints quotient map functions. bulwahn@44864: bulwahn@44864: \item @{command (HOL) "print_quotients"} prints quotients. bulwahn@44864: bulwahn@44864: \item @{command (HOL) "print_quotconsts"} prints quotient constants. bulwahn@44864: bulwahn@44864: \end{description} bulwahn@44864: bulwahn@44864: *} haftmann@41644: wenzelm@26849: section {* Arithmetic proof support *} wenzelm@26849: wenzelm@26849: text {* wenzelm@26849: \begin{matharray}{rcl} wenzelm@28761: @{method_def (HOL) arith} & : & @{text method} \\ nipkow@30863: @{attribute_def (HOL) arith} & : & @{text attribute} \\ wenzelm@28761: @{attribute_def (HOL) arith_split} & : & @{text attribute} \\ wenzelm@26849: \end{matharray} wenzelm@26849: wenzelm@26849: The @{method (HOL) arith} method decides linear arithmetic problems wenzelm@26849: (on types @{text nat}, @{text int}, @{text real}). Any current wenzelm@26849: facts are inserted into the goal before running the procedure. wenzelm@26849: nipkow@30863: The @{attribute (HOL) arith} attribute declares facts that are nipkow@30863: always supplied to the arithmetic provers implicitly. nipkow@30863: wenzelm@26894: The @{attribute (HOL) arith_split} attribute declares case split wenzelm@30865: rules to be expanded before @{method (HOL) arith} is invoked. wenzelm@26849: nipkow@30863: Note that a simpler (but faster) arithmetic prover is nipkow@30863: already invoked by the Simplifier. 
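  For illustration, here is a minimal ad-hoc example (not tied to any
  particular application): a purely linear goal over type @{text int}
  that @{method (HOL) arith} solves directly.
*}

lemma "(x::int) < y ==> x + 1 <= y"
  by arith

text {*
  \noindent The goal above is merely a sketch; in practice such
  subgoals typically arise as side conditions within larger proofs.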
wenzelm@26849: *} wenzelm@26849: wenzelm@26849: wenzelm@30169: section {* Intuitionistic proof search *} wenzelm@30169: wenzelm@30169: text {* wenzelm@30169: \begin{matharray}{rcl} wenzelm@30171: @{method_def (HOL) iprover} & : & @{text method} \\ wenzelm@30169: \end{matharray} wenzelm@30169: wenzelm@43467: @{rail " wenzelm@43467: @@{method (HOL) iprover} ( @{syntax rulemod} * ) wenzelm@43467: "} wenzelm@30169: wenzelm@30171: The @{method (HOL) iprover} method performs intuitionistic proof wenzelm@30171: search, depending on specifically declared rules from the context, wenzelm@30171: or given as explicit arguments. Chained facts are inserted into the wenzelm@35613: goal before commencing proof search. wenzelm@35613: wenzelm@30169: Rules need to be classified as @{attribute (Pure) intro}, wenzelm@30169: @{attribute (Pure) elim}, or @{attribute (Pure) dest}; here the wenzelm@30169: ``@{text "!"}'' indicator refers to ``safe'' rules, which may be wenzelm@30169: applied aggressively (without considering back-tracking later). wenzelm@30169: Rules declared with ``@{text "?"}'' are ignored in proof search (the wenzelm@43497: single-step @{method (Pure) rule} method still observes these). An wenzelm@30169: explicit weight annotation may be given as well; otherwise the wenzelm@30169: number of rule premises will be taken into account here. wenzelm@30169: *} wenzelm@30169: blanchet@44440: section {* Model Elimination and Resolution *} blanchet@44440: blanchet@44440: text {* blanchet@44440: \begin{matharray}{rcl} blanchet@44440: @{method_def (HOL) "meson"} & : & @{text method} \\ blanchet@44440: @{method_def (HOL) "metis"} & : & @{text method} \\ blanchet@44440: \end{matharray} blanchet@44440: blanchet@44440: @{rail " blanchet@44440: @@{method (HOL) meson} @{syntax thmrefs}? blanchet@44440: ; blanchet@44440: blanchet@44440: @@{method (HOL) metis} ( '(' ('partial_types' | 'full_types' | 'no_types' blanchet@44440: | @{syntax name}) ')' )? @{syntax thmrefs}? blanchet@44440: "} blanchet@44440: blanchet@44440: The @{method (HOL) meson} method implements Loveland's model elimination blanchet@44440: procedure \cite{loveland-78}. See @{file "~~/src/HOL/ex/Meson_Test.thy"} for blanchet@44440: examples. blanchet@44440: blanchet@44440: The @{method (HOL) metis} method combines ordered resolution and ordered blanchet@44440: paramodulation to find first-order (or mildly higher-order) proofs. The first blanchet@44440: optional argument specifies a type encoding; see the Sledgehammer manual blanchet@44440: \cite{isabelle-sledgehammer} for details. The @{file blanchet@44440: "~~/src/HOL/Metis_Examples"} directory contains several small theories blanchet@44440: developed to a large extent using Metis. blanchet@44440: *} wenzelm@30169: wenzelm@30171: section {* Coherent Logic *} wenzelm@30171: wenzelm@30171: text {* wenzelm@30171: \begin{matharray}{rcl} wenzelm@30171: @{method_def (HOL) "coherent"} & : & @{text method} \\ wenzelm@30171: \end{matharray} wenzelm@30171: wenzelm@43467: @{rail " wenzelm@43467: @@{method (HOL) coherent} @{syntax thmrefs}? wenzelm@43467: "} wenzelm@30171: wenzelm@30171: The @{method (HOL) coherent} method solves problems of wenzelm@30171: \emph{Coherent Logic} \cite{Bezem-Coquand:2005}, which covers wenzelm@30171: applications in confluence theory, lattice theory and projective wenzelm@41048: geometry. See @{file "~~/src/HOL/ex/Coherent.thy"} for some wenzelm@30171: examples. 
wenzelm@30171: *} wenzelm@30171: wenzelm@30171: blanchet@43082: section {* Proving propositions *} blanchet@43082: blanchet@43082: text {* blanchet@43082: In addition to the standard proof methods, a number of diagnosis blanchet@43082: tools search for proofs and provide an Isar proof snippet on success. blanchet@43082: These tools are available via the following commands. blanchet@43082: blanchet@43082: \begin{matharray}{rcl} blanchet@43082: @{command_def (HOL) "solve_direct"}@{text "\<^sup>*"} & : & @{text "proof \"} \\ blanchet@43082: @{command_def (HOL) "try"}@{text "\<^sup>*"} & : & @{text "proof \"} \\ blanchet@43857: @{command_def (HOL) "try_methods"}@{text "\<^sup>*"} & : & @{text "proof \"} \\ blanchet@43082: @{command_def (HOL) "sledgehammer"}@{text "\<^sup>*"} & : & @{text "proof \"} \\ blanchet@43082: @{command_def (HOL) "sledgehammer_params"} & : & @{text "theory \ theory"} blanchet@43082: \end{matharray} blanchet@43082: wenzelm@43467: @{rail " blanchet@43881: @@{command (HOL) try} blanchet@43881: ; blanchet@43881: blanchet@43857: @@{command (HOL) try_methods} ( ( ( 'simp' | 'intro' | 'elim' | 'dest' ) ':' @{syntax thmrefs} ) + ) ? wenzelm@43467: @{syntax nat}? wenzelm@43467: ; blanchet@43881: wenzelm@43467: @@{command (HOL) sledgehammer} ( '[' args ']' )? facts? @{syntax nat}? blanchet@43082: ; blanchet@43082: wenzelm@43467: @@{command (HOL) sledgehammer_params} ( ( '[' args ']' ) ? ) blanchet@43082: ; blanchet@43082: wenzelm@43467: args: ( @{syntax name} '=' value + ',' ) blanchet@43082: ; blanchet@43082: wenzelm@43467: facts: '(' ( ( ( ( 'add' | 'del' ) ':' ) ? @{syntax thmrefs} ) + ) ? ')' blanchet@43082: ; blanchet@43860: "} % FIXME check args "value" blanchet@43082: blanchet@43082: \begin{description} blanchet@43082: blanchet@43082: \item @{command (HOL) "solve_direct"} checks whether the current subgoals can blanchet@43082: be solved directly by an existing theorem. Duplicate lemmas can be detected blanchet@43082: in this way. blanchet@43082: blanchet@43857: \item @{command (HOL) "try_methods"} attempts to prove a subgoal using a combination blanchet@43082: of standard proof methods (@{text auto}, @{text simp}, @{text blast}, etc.). blanchet@43082: Additional facts supplied via @{text "simp:"}, @{text "intro:"}, blanchet@43082: @{text "elim:"}, and @{text "dest:"} are passed to the appropriate proof blanchet@43082: methods. blanchet@43082: bulwahn@44785: \item @{command (HOL) "try"} attempts to prove or disprove a subgoal bulwahn@44785: using a combination of provers and disprovers (@{text "solve_direct"}, bulwahn@44785: @{text "quickcheck"}, @{text "try_methods"}, @{text "sledgehammer"}, bulwahn@44785: @{text "nitpick"}). bulwahn@44785: blanchet@43082: \item @{command (HOL) "sledgehammer"} attempts to prove a subgoal using external blanchet@43082: automatic provers (resolution provers and SMT solvers). See the Sledgehammer blanchet@43082: manual \cite{isabelle-sledgehammer} for details. blanchet@43082: blanchet@43082: \item @{command (HOL) "sledgehammer_params"} changes blanchet@43082: @{command (HOL) "sledgehammer"} configuration options persistently. blanchet@43082: blanchet@43082: \end{description} blanchet@43082: *} blanchet@43082: blanchet@43082: haftmann@31906: section {* Checking and refuting propositions *} haftmann@31906: haftmann@31906: text {* haftmann@31906: Identifying incorrect propositions usually involves evaluation of blanchet@43082: particular assignments and systematic counterexample search. 
This haftmann@31906: is supported by the following commands. haftmann@31906: haftmann@31906: \begin{matharray}{rcl} haftmann@31906: @{command_def (HOL) "value"}@{text "\<^sup>*"} & : & @{text "context \"} \\ haftmann@31906: @{command_def (HOL) "quickcheck"}@{text "\<^sup>*"} & : & @{text "proof \"} \\ blanchet@43082: @{command_def (HOL) "refute"}@{text "\<^sup>*"} & : & @{text "proof \"} \\ blanchet@43082: @{command_def (HOL) "nitpick"}@{text "\<^sup>*"} & : & @{text "proof \"} \\ blanchet@43082: @{command_def (HOL) "quickcheck_params"} & : & @{text "theory \ theory"} \\ blanchet@43082: @{command_def (HOL) "refute_params"} & : & @{text "theory \ theory"} \\ blanchet@43082: @{command_def (HOL) "nitpick_params"} & : & @{text "theory \ theory"} haftmann@31906: \end{matharray} haftmann@31906: wenzelm@43467: @{rail " wenzelm@43467: @@{command (HOL) value} ( '[' name ']' )? modes? @{syntax term} haftmann@31906: ; haftmann@31906: wenzelm@43467: (@@{command (HOL) quickcheck} | @@{command (HOL) refute} | @@{command (HOL) nitpick}) wenzelm@43467: ( '[' args ']' )? @{syntax nat}? haftmann@31906: ; haftmann@31906: wenzelm@43467: (@@{command (HOL) quickcheck_params} | @@{command (HOL) refute_params} | wenzelm@43467: @@{command (HOL) nitpick_params}) ( '[' args ']' )? haftmann@31906: ; haftmann@31906: wenzelm@43467: modes: '(' (@{syntax name} +) ')' haftmann@31906: ; haftmann@31906: wenzelm@43467: args: ( @{syntax name} '=' value + ',' ) haftmann@31906: ; wenzelm@43467: "} % FIXME check "value" haftmann@31906: haftmann@31906: \begin{description} haftmann@31906: haftmann@31906: \item @{command (HOL) "value"}~@{text t} evaluates and prints a haftmann@31906: term; optionally @{text modes} can be specified, which are wenzelm@44130: appended to the current print mode; see \secref{sec:print-modes}. haftmann@31906: Internally, the evaluation is performed by registered evaluators, haftmann@31906: which are invoked sequentially until a result is returned. haftmann@31906: Alternatively, a specific evaluator can be selected using square haftmann@37419: brackets; typical evaluators use the current set of code equations wenzelm@44130: to normalize and include @{text simp} for fully symbolic wenzelm@44130: evaluation using the simplifier, @{text nbe} for wenzelm@44130: \emph{normalization by evaluation} and \emph{code} for code wenzelm@44130: generation in SML. haftmann@31906: haftmann@31906: \item @{command (HOL) "quickcheck"} tests the current goal for blanchet@43082: counterexamples using a series of assignments for its haftmann@31906: free variables; by default the first subgoal is tested, but another haftmann@31906: can be selected explicitly using an optional goal index. bulwahn@41162: Assignments can be chosen by exhausting the search space up to a given bulwahn@44785: size, or using a fixed number of random assignments in the search space, bulwahn@44785: or exploring the search space symbolically using narrowing. bulwahn@41162: By default, quickcheck uses exhaustive testing. haftmann@31906: A number of configuration options are supported for haftmann@31906: @{command (HOL) "quickcheck"}, notably: haftmann@31906: haftmann@31906: \begin{description} haftmann@31906: bulwahn@44785: \item[@{text tester}] specifies which testing approach to apply. bulwahn@44785: There are three testers: @{text exhaustive}, bulwahn@44785: @{text random}, and @{text narrowing}. bulwahn@41162: An unknown configuration option is treated as an argument to tester, bulwahn@41162: making @{text "tester ="} optional.
bulwahn@44785: When multiple testers are given, these are applied in parallel. bulwahn@44785: If no tester is specified, quickcheck uses the testers that are bulwahn@44785: set active, i.e.\ those for which the configurations bulwahn@44785: @{text quickcheck_exhaustive_active}, @{text quickcheck_random_active}, bulwahn@44785: or @{text quickcheck_narrowing_active} are set to true. wenzelm@40515: \item[@{text size}] specifies the maximum size of the search space wenzelm@40515: for assignment values. haftmann@31906: bulwahn@42956: \item[@{text eval}] takes a term or a list of terms and evaluates bulwahn@42956: these terms under the variable assignment found by quickcheck. wenzelm@42994: wenzelm@40515: \item[@{text iterations}] sets how many sets of assignments are wenzelm@40515: generated for each particular size. haftmann@31906: wenzelm@40515: \item[@{text no_assms}] specifies whether assumptions in wenzelm@40515: structured proofs should be ignored. blanchet@35331: wenzelm@40515: \item[@{text timeout}] sets the time limit in seconds. bulwahn@40480: wenzelm@40515: \item[@{text default_type}] sets the type(s) generally used to wenzelm@40515: instantiate type variables. bulwahn@40480: wenzelm@40515: \item[@{text report}] if set, quickcheck reports how many tests wenzelm@40515: fulfilled the preconditions. bulwahn@40480: wenzelm@40515: \item[@{text quiet}] if not set, quickcheck informs about the wenzelm@40515: current size for assignment values. bulwahn@40480: wenzelm@40515: \item[@{text expect}] can be used to check if the user's wenzelm@40515: expectation was met (@{text no_expectation}, @{text wenzelm@40515: no_counterexample}, or @{text counterexample}). bulwahn@40480: haftmann@31906: \end{description} haftmann@31906: haftmann@31906: These options can be given within square brackets. haftmann@31906: blanchet@43082: \item @{command (HOL) "quickcheck_params"} changes blanchet@43082: @{command (HOL) "quickcheck"} configuration options persistently. blanchet@43082: blanchet@43082: \item @{command (HOL) "refute"} tests the current goal for blanchet@43082: counterexamples using a reduction to SAT. The following configuration blanchet@43082: options are supported: blanchet@43082: blanchet@43082: \begin{description} blanchet@43082: blanchet@43082: \item[@{text minsize}] specifies the minimum size (cardinality) of the blanchet@43082: models to search for. blanchet@43082: blanchet@43082: \item[@{text maxsize}] specifies the maximum size (cardinality) of the blanchet@43082: models to search for. Nonpositive values mean $\infty$. blanchet@43082: blanchet@43082: \item[@{text maxvars}] specifies the maximum number of Boolean variables blanchet@43082: to use when transforming the term into a propositional formula. blanchet@43082: Nonpositive values mean $\infty$. blanchet@43082: blanchet@43082: \item[@{text satsolver}] specifies the SAT solver to use. blanchet@43082: blanchet@43082: \item[@{text no_assms}] specifies whether assumptions in blanchet@43082: structured proofs should be ignored. blanchet@43082: blanchet@43082: \item[@{text maxtime}] sets the time limit in seconds. blanchet@43082: blanchet@43082: \item[@{text expect}] can be used to check if the user's blanchet@43082: expectation was met (@{text genuine}, @{text potential}, blanchet@43082: @{text none}, or @{text unknown}). blanchet@43082: blanchet@43082: \end{description} blanchet@43082: blanchet@43082: These options can be given within square brackets.
blanchet@43082: blanchet@43082: \item @{command (HOL) "refute_params"} changes blanchet@43082: @{command (HOL) "refute"} configuration options persistently. blanchet@43082: blanchet@43082: \item @{command (HOL) "nitpick"} tests the current goal for counterexamples blanchet@43082: using a reduction to first-order relational logic. See the Nitpick manual blanchet@43082: \cite{isabelle-nitpick} for details. blanchet@43082: blanchet@43082: \item @{command (HOL) "nitpick_params"} changes blanchet@43082: @{command (HOL) "nitpick"} configuration options persistently. haftmann@31906: haftmann@31906: \end{description} haftmann@31906: *} haftmann@31906: haftmann@31906: wenzelm@28752: section {* Unstructured case analysis and induction \label{sec:hol-induct-tac} *} wenzelm@26849: wenzelm@26849: text {* wenzelm@27123: The following tools of Isabelle/HOL support cases analysis and wenzelm@27123: induction in unstructured tactic scripts; see also wenzelm@27123: \secref{sec:cases-induct} for proper Isar versions of similar ideas. wenzelm@26849: wenzelm@26849: \begin{matharray}{rcl} wenzelm@28761: @{method_def (HOL) case_tac}@{text "\<^sup>*"} & : & @{text method} \\ wenzelm@28761: @{method_def (HOL) induct_tac}@{text "\<^sup>*"} & : & @{text method} \\ wenzelm@28761: @{method_def (HOL) ind_cases}@{text "\<^sup>*"} & : & @{text method} \\ wenzelm@28761: @{command_def (HOL) "inductive_cases"}@{text "\<^sup>*"} & : & @{text "local_theory \ local_theory"} \\ wenzelm@26849: \end{matharray} wenzelm@26849: wenzelm@43467: @{rail " wenzelm@43576: @@{method (HOL) case_tac} @{syntax goal_spec}? @{syntax term} rule? wenzelm@26849: ; wenzelm@43576: @@{method (HOL) induct_tac} @{syntax goal_spec}? (@{syntax insts} * @'and') rule? wenzelm@26849: ; wenzelm@43467: @@{method (HOL) ind_cases} (@{syntax prop}+) (@'for' (@{syntax name}+))? wenzelm@26849: ; wenzelm@43467: @@{command (HOL) inductive_cases} (@{syntax thmdecl}? (@{syntax prop}+) + @'and') wenzelm@26849: ; wenzelm@26849: wenzelm@43467: rule: 'rule' ':' @{syntax thmref} wenzelm@43467: "} wenzelm@26849: wenzelm@28760: \begin{description} wenzelm@26849: wenzelm@28760: \item @{method (HOL) case_tac} and @{method (HOL) induct_tac} admit wenzelm@28760: to reason about inductive types. Rules are selected according to wenzelm@28760: the declarations by the @{attribute cases} and @{attribute induct} wenzelm@28760: attributes, cf.\ \secref{sec:cases-induct}. The @{command (HOL) wenzelm@28760: datatype} package already takes care of this. wenzelm@27123: wenzelm@27123: These unstructured tactics feature both goal addressing and dynamic wenzelm@26849: instantiation. Note that named rule cases are \emph{not} provided wenzelm@27123: as would be by the proper @{method cases} and @{method induct} proof wenzelm@27123: methods (see \secref{sec:cases-induct}). Unlike the @{method wenzelm@27123: induct} method, @{method induct_tac} does not handle structured rule wenzelm@27123: statements, only the compact object-logic conclusion of the subgoal wenzelm@27123: being addressed. wenzelm@42994: wenzelm@28760: \item @{method (HOL) ind_cases} and @{command (HOL) wenzelm@28760: "inductive_cases"} provide an interface to the internal @{ML_text wenzelm@26860: mk_cases} operation. Rules are simplified in an unrestricted wenzelm@26860: forward manner. 
wenzelm@26849: wenzelm@26849: While @{method (HOL) ind_cases} is a proof method to apply the wenzelm@26849: result immediately as elimination rules, @{command (HOL) wenzelm@26849: "inductive_cases"} provides case split theorems at the theory level wenzelm@26849: for later use. The @{keyword "for"} argument of the @{method (HOL) wenzelm@26849: ind_cases} method allows to specify a list of variables that should wenzelm@26849: be generalized before applying the resulting rule. wenzelm@26849: wenzelm@28760: \end{description} wenzelm@26849: *} wenzelm@26849: wenzelm@26849: wenzelm@26849: section {* Executable code *} wenzelm@26849: wenzelm@43498: text {* For validation purposes, it is often useful to \emph{execute} wenzelm@43498: specifications. In principle, execution could be simulated by wenzelm@43498: Isabelle's inference kernel, i.e. by a combination of resolution and wenzelm@43498: simplification. Unfortunately, this approach is rather inefficient. wenzelm@43498: A more efficient way of executing specifications is to translate wenzelm@43498: them into a functional programming language such as ML. wenzelm@26849: wenzelm@43498: Isabelle provides two generic frameworks to support code generation wenzelm@43498: from executable specifications. Isabelle/HOL instantiates these wenzelm@43498: mechanisms in a way that is amenable to end-user applications. wenzelm@43498: *} wenzelm@43498: wenzelm@43498: wenzelm@43498: subsection {* The new code generator (F. Haftmann) *} wenzelm@43498: wenzelm@43498: text {* This framework generates code from functional programs haftmann@37397: (including overloading using type classes) to SML \cite{SML}, OCaml haftmann@39049: \cite{OCaml}, Haskell \cite{haskell-revised-report} and Scala wenzelm@43498: \cite{scala-overview-tech-report}. Conceptually, code generation is wenzelm@43498: split up in three steps: \emph{selection} of code theorems, wenzelm@43498: \emph{translation} into an abstract executable view and wenzelm@43498: \emph{serialization} to a specific \emph{target language}. wenzelm@43498: Inductive specifications can be executed using the predicate wenzelm@43498: compiler which operates within HOL. See \cite{isabelle-codegen} for wenzelm@43498: an introduction. 
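  As a first impression, the following ad-hoc example (the constant
  @{text double} is invented purely for illustration) defines a small
  function and exports it to SML; the ``@{text "file -"}'' argument
  directs the generated code to standard output.
*}

definition double :: "nat => nat"
  where "double n = 2 * n"

export_code double in SML module_name Example file -

text {*
  \noindent The individual commands and attributes of this framework
  are as follows.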
haftmann@37397: haftmann@37397: \begin{matharray}{rcl} haftmann@37397: @{command_def (HOL) "export_code"}@{text "\<^sup>*"} & : & @{text "context \"} \\ haftmann@37397: @{attribute_def (HOL) code} & : & @{text attribute} \\ haftmann@37397: @{command_def (HOL) "code_abort"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "code_datatype"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "print_codesetup"}@{text "\<^sup>*"} & : & @{text "context \"} \\ haftmann@37397: @{attribute_def (HOL) code_inline} & : & @{text attribute} \\ haftmann@37397: @{attribute_def (HOL) code_post} & : & @{text attribute} \\ haftmann@37397: @{command_def (HOL) "print_codeproc"}@{text "\<^sup>*"} & : & @{text "context \"} \\ haftmann@37397: @{command_def (HOL) "code_thms"}@{text "\<^sup>*"} & : & @{text "context \"} \\ haftmann@37397: @{command_def (HOL) "code_deps"}@{text "\<^sup>*"} & : & @{text "context \"} \\ haftmann@37397: @{command_def (HOL) "code_const"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "code_type"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "code_class"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "code_instance"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "code_reserved"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "code_monad"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "code_include"} & : & @{text "theory \ theory"} \\ haftmann@37397: @{command_def (HOL) "code_modulename"} & : & @{text "theory \ theory"} \\ haftmann@39832: @{command_def (HOL) "code_reflect"} & : & @{text "theory \ theory"} haftmann@37397: \end{matharray} haftmann@37397: wenzelm@43467: @{rail " wenzelm@43467: @@{command (HOL) export_code} ( constexpr + ) \\ wenzelm@43467: ( ( @'in' target ( @'module_name' @{syntax string} ) ? \\ wenzelm@43467: ( @'file' ( @{syntax string} | '-' ) ) ? ( '(' args ')' ) ?) + ) ? haftmann@37397: ; haftmann@37397: wenzelm@43467: const: @{syntax term} haftmann@37397: ; haftmann@37397: haftmann@40959: constexpr: ( const | 'name._' | '_' ) haftmann@37397: ; haftmann@37397: wenzelm@43467: typeconstructor: @{syntax nameref} haftmann@37397: ; haftmann@37397: wenzelm@43467: class: @{syntax nameref} haftmann@37397: ; haftmann@37397: haftmann@39049: target: 'SML' | 'OCaml' | 'Haskell' | 'Scala' haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{attribute (HOL) code} ( 'del' | 'abstype' | 'abstract' )? haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_abort} ( const + ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_datatype} ( const + ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{attribute (HOL) code_inline} ( 'del' ) ? haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{attribute (HOL) code_post} ( 'del' ) ? haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_thms} ( constexpr + ) ? haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_deps} ( constexpr + ) ? haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_const} (const + @'and') \\ wenzelm@43467: ( ( '(' target ( syntax ? + @'and' ) ')' ) + ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_type} (typeconstructor + @'and') \\ wenzelm@43467: ( ( '(' target ( syntax ? 
+ @'and' ) ')' ) + ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_class} (class + @'and') \\ wenzelm@43467: ( ( '(' target \\ ( @{syntax string} ? + @'and' ) ')' ) + ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_instance} (( typeconstructor '::' class ) + @'and') \\ wenzelm@43467: ( ( '(' target ( '-' ? + @'and' ) ')' ) + ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_reserved} target ( @{syntax string} + ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_monad} const const target haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_include} target ( @{syntax string} ( @{syntax string} | '-') ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_modulename} target ( ( @{syntax string} @{syntax string} ) + ) haftmann@37397: ; haftmann@37397: wenzelm@43467: @@{command (HOL) code_reflect} @{syntax string} \\ wenzelm@43467: ( @'datatypes' ( @{syntax string} '=' ( '_' | ( @{syntax string} + '|' ) + @'and' ) ) ) ? \\ wenzelm@43467: ( @'functions' ( @{syntax string} + ) ) ? ( @'file' @{syntax string} ) ? haftmann@39832: ; haftmann@39832: wenzelm@43467: syntax: @{syntax string} | ( @'infix' | @'infixl' | @'infixr' ) @{syntax nat} @{syntax string} wenzelm@43467: "} haftmann@37397: haftmann@37397: \begin{description} haftmann@37397: haftmann@37397: \item @{command (HOL) "export_code"} generates code for a given list haftmann@39832: of constants in the specified target language(s). If no haftmann@39832: serialization instruction is given, only abstract code is generated haftmann@39832: internally. haftmann@37397: haftmann@37397: Constants may be specified by giving them literally, referring to haftmann@37397: all executable contants within a certain theory by giving @{text haftmann@37397: "name.*"}, or referring to \emph{all} executable constants currently haftmann@37397: available by giving @{text "*"}. haftmann@37397: haftmann@37397: By default, for each involved theory one corresponding name space haftmann@37397: module is generated. Alternativly, a module name may be specified haftmann@37397: after the @{keyword "module_name"} keyword; then \emph{all} code is haftmann@37397: placed in this module. haftmann@37397: haftmann@39832: For \emph{SML}, \emph{OCaml} and \emph{Scala} the file specification haftmann@39832: refers to a single file; for \emph{Haskell}, it refers to a whole haftmann@39832: directory, where code is generated in multiple files reflecting the haftmann@39832: module hierarchy. Omitting the file specification denotes standard haftmann@37748: output. haftmann@37397: haftmann@37397: Serializers take an optional list of arguments in parentheses. For haftmann@37397: \emph{SML} and \emph{OCaml}, ``@{text no_signatures}`` omits haftmann@37397: explicit module signatures. wenzelm@42994: haftmann@39832: For \emph{Haskell} a module name prefix may be given using the haftmann@39832: ``@{text "root:"}'' argument; ``@{text string_classes}'' adds a haftmann@39832: ``@{verbatim "deriving (Read, Show)"}'' clause to each appropriate haftmann@39832: datatype declaration. haftmann@37397: haftmann@37397: \item @{attribute (HOL) code} explicitly selects (or with option haftmann@38706: ``@{text "del"}'' deselects) a code equation for code generation. haftmann@38706: Usually packages introducing code equations provide a reasonable haftmann@38706: default setup for selection. 
Variants @{text "code abstype"} and haftmann@38706: @{text "code abstract"} declare abstract datatype certificates or haftmann@38706: code equations on abstract datatype representations respectively. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_abort"} declares constants which are not haftmann@39832: required to have a definition by means of code equations; if needed haftmann@39832: these are implemented by program abort instead. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_datatype"} specifies a constructor set haftmann@37397: for a logical type. haftmann@37397: haftmann@37397: \item @{command (HOL) "print_codesetup"} gives an overview on haftmann@37397: selected code equations and code generator datatypes. haftmann@37397: haftmann@39832: \item @{attribute (HOL) code_inline} declares (or with option haftmann@39832: ``@{text "del"}'' removes) inlining theorems which are applied as haftmann@39832: rewrite rules to any code equation during preprocessing. haftmann@37397: haftmann@39832: \item @{attribute (HOL) code_post} declares (or with option ``@{text haftmann@39832: "del"}'' removes) theorems which are applied as rewrite rules to any haftmann@39832: result of an evaluation. haftmann@37397: haftmann@39832: \item @{command (HOL) "print_codeproc"} prints the setup of the code haftmann@39832: generator preprocessor. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_thms"} prints a list of theorems haftmann@37397: representing the corresponding program containing all given haftmann@37397: constants after preprocessing. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_deps"} visualizes dependencies of haftmann@37397: theorems representing the corresponding program containing all given haftmann@37397: constants after preprocessing. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_const"} associates a list of constants haftmann@37397: with target-specific serializations; omitting a serialization haftmann@37397: deletes an existing serialization. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_type"} associates a list of type haftmann@37397: constructors with target-specific serializations; omitting a haftmann@37397: serialization deletes an existing serialization. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_class"} associates a list of classes haftmann@37397: with target-specific class names; omitting a serialization deletes haftmann@37397: an existing serialization. This applies only to \emph{Haskell}. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_instance"} declares a list of type haftmann@37397: constructor / class instance relations as ``already present'' for a haftmann@37397: given target. Omitting a ``@{text "-"}'' deletes an existing haftmann@37397: ``already present'' declaration. This applies only to haftmann@37397: \emph{Haskell}. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_reserved"} declares a list of names as haftmann@37397: reserved for a given target, preventing it to be shadowed by any haftmann@37397: generated code. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_monad"} provides an auxiliary mechanism haftmann@37397: to generate monadic code for Haskell. haftmann@37397: haftmann@37397: \item @{command (HOL) "code_include"} adds arbitrary named content haftmann@37397: (``include'') to generated code. A ``@{text "-"}'' as last argument haftmann@37397: will remove an already added ``include''. 
haftmann@37397: haftmann@37397: \item @{command (HOL) "code_modulename"} declares aliasings from one haftmann@37397: module name onto another. haftmann@37397: haftmann@39832: \item @{command (HOL) "code_reflect"} without a ``@{text "file"}'' haftmann@39832: argument compiles code into the system runtime environment and haftmann@39832: modifies the code generator setup that future invocations of system haftmann@39832: runtime code generation referring to one of the ``@{text haftmann@39832: "datatypes"}'' or ``@{text "functions"}'' entities use these precompiled haftmann@39832: entities. With a ``@{text "file"}'' argument, the corresponding code haftmann@39832: is generated into that specified file without modifying the code haftmann@39832: generator setup. haftmann@39832: haftmann@37397: \end{description} wenzelm@43498: *} haftmann@37397: wenzelm@43498: wenzelm@43498: subsection {* The old code generator (S. Berghofer) *} wenzelm@43498: wenzelm@43498: text {* This framework generates code from both functional and wenzelm@43498: relational programs to SML, as explained below. wenzelm@26849: wenzelm@26849: \begin{matharray}{rcl} wenzelm@43498: @{command_def "code_module"} & : & @{text "theory \ theory"} \\ wenzelm@43498: @{command_def "code_library"} & : & @{text "theory \ theory"} \\ wenzelm@43498: @{command_def "consts_code"} & : & @{text "theory \ theory"} \\ wenzelm@43498: @{command_def "types_code"} & : & @{text "theory \ theory"} \\ wenzelm@43497: @{attribute_def code} & : & @{text attribute} \\ wenzelm@26849: \end{matharray} wenzelm@26849: wenzelm@43467: @{rail " wenzelm@43498: ( @@{command code_module} | @@{command code_library} ) modespec? @{syntax name}? \\ wenzelm@43467: ( @'file' name ) ? ( @'imports' ( @{syntax name} + ) ) ? \\ wenzelm@43467: @'contains' ( ( @{syntax name} '=' @{syntax term} ) + | @{syntax term} + ) wenzelm@26849: ; wenzelm@26849: wenzelm@43467: modespec: '(' ( @{syntax name} * ) ')' wenzelm@26849: ; wenzelm@26849: wenzelm@43467: @@{command (HOL) consts_code} (codespec +) wenzelm@26849: ; wenzelm@26849: wenzelm@26849: codespec: const template attachment ? wenzelm@26849: ; wenzelm@26849: wenzelm@43467: @@{command (HOL) types_code} (tycodespec +) wenzelm@26849: ; wenzelm@26849: wenzelm@43467: tycodespec: @{syntax name} template attachment ? wenzelm@26849: ; wenzelm@26849: wenzelm@43467: const: @{syntax term} wenzelm@26849: ; wenzelm@26849: wenzelm@43467: template: '(' @{syntax string} ')' wenzelm@26849: ; wenzelm@26849: wenzelm@43467: attachment: 'attach' modespec? '{' @{syntax text} '}' wenzelm@26849: ; wenzelm@26849: wenzelm@43497: @@{attribute code} name? wenzelm@43467: "} wenzelm@26849: *} wenzelm@26849: wenzelm@27045: wenzelm@43498: subsubsection {* Invoking the code generator *} wenzelm@43498: wenzelm@43498: text {* The code generator is invoked via the @{command code_module} wenzelm@43498: and @{command code_library} commands, which correspond to wenzelm@43498: \emph{incremental} and \emph{modular} code generation, respectively. wenzelm@43498: wenzelm@43498: \begin{description} wenzelm@43498: wenzelm@43498: \item [Modular] For each theory, an ML structure is generated, wenzelm@43498: containing the code generated from the constants defined in this wenzelm@43498: theory. wenzelm@43498: wenzelm@43498: \item [Incremental] All the generated code is emitted into the same wenzelm@43498: structure. This structure may import code from previously generated wenzelm@43498: structures, which can be specified via @{keyword "imports"}. 
wenzelm@43498: Moreover, the generated structure may also be referred to in later wenzelm@43498: invocations of the code generator. wenzelm@43498: wenzelm@43498: \end{description} wenzelm@43498: wenzelm@43498: After the @{command code_module} and @{command code_library} wenzelm@43498: keywords, the user may specify an optional list of ``modes'' in wenzelm@43498: parentheses. These can be used to instruct the code generator to wenzelm@43498: emit additional code for special purposes, e.g.\ functions for wenzelm@43498: converting elements of generated datatypes to Isabelle terms, or wenzelm@43498: test data generators. The list of modes is followed by a module wenzelm@43498: name. The module name is optional for modular code generation, but wenzelm@43498: must be specified for incremental code generation. wenzelm@43498: wenzelm@43498: The code can either be written to a file, in which case a file name wenzelm@43498: has to be specified after the @{keyword "file"} keyword, or be loaded wenzelm@43498: directly into Isabelle's ML environment. In the latter case, the wenzelm@43498: @{command ML} theory command can be used to inspect the results wenzelm@43498: interactively, for example. wenzelm@43498: wenzelm@43498: The terms from which to generate code can be specified after the wenzelm@43498: @{keyword "contains"} keyword, either as a list of bindings, or just wenzelm@43498: as a list of terms. In the latter case, the code generator just wenzelm@43498: produces code for all constants and types occuring in the term, but wenzelm@43498: does not bind the compiled terms to ML identifiers. wenzelm@43498: wenzelm@43498: Here is an example: wenzelm@43498: *} wenzelm@43498: wenzelm@43498: code_module Test wenzelm@43523: contains test = "foldl op + (0 :: int) [1, 2, 3, 4, 5]" wenzelm@43498: wenzelm@43498: text {* \noindent This binds the result of compiling the given term to wenzelm@43498: the ML identifier @{ML Test.test}. *} wenzelm@43498: wenzelm@43498: ML {* @{assert} (Test.test = 15) *} wenzelm@43498: wenzelm@43498: wenzelm@43498: subsubsection {* Configuring the code generator *} wenzelm@43498: wenzelm@43498: text {* When generating code for a complex term, the code generator wenzelm@43498: recursively calls itself for all subterms. When it arrives at a wenzelm@43498: constant, the default strategy of the code generator is to look up wenzelm@43498: its definition and try to generate code for it. Constants which wenzelm@43498: have no definitions that are immediately executable, may be wenzelm@43498: associated with a piece of ML code manually using the @{command_ref wenzelm@43498: consts_code} command. It takes a list whose elements consist of a wenzelm@43498: constant (given in usual term syntax -- an explicit type constraint wenzelm@43498: accounts for overloading), and a mixfix template describing the ML wenzelm@43498: code. The latter is very much the same as the mixfix templates used wenzelm@43498: when declaring new constants. The most notable difference is that wenzelm@43498: terms may be included in the ML template using antiquotation wenzelm@43498: brackets @{verbatim "{"}@{verbatim "*"}~@{text "..."}~@{verbatim wenzelm@43498: "*"}@{verbatim "}"}. wenzelm@43498: wenzelm@43498: A similar mechanism is available for types: @{command_ref wenzelm@43498: types_code} associates type constructors with specific ML code. 
wenzelm@43498: wenzelm@43498: For example, the following declarations copied from @{file wenzelm@43498: "~~/src/HOL/Product_Type.thy"} describe how the product type of wenzelm@43498: Isabelle/HOL should be compiled to ML. *} wenzelm@43498: wenzelm@43498: typedecl ('a, 'b) prod wenzelm@43498: consts Pair :: "'a \ 'b \ ('a, 'b) prod" wenzelm@43498: wenzelm@43498: types_code prod ("(_ */ _)") wenzelm@43498: consts_code Pair ("(_,/ _)") wenzelm@43498: wenzelm@43498: text {* Sometimes, the code associated with a constant or type may wenzelm@43498: need to refer to auxiliary functions, which have to be emitted when wenzelm@43498: the constant is used. Code for such auxiliary functions can be wenzelm@43498: declared using @{keyword "attach"}. For example, the @{const wfrec} wenzelm@43498: function can be implemented as follows: wenzelm@43498: *} wenzelm@43498: wenzelm@43498: consts_code wfrec ("\wfrec?") (* FIXME !? *) wenzelm@43523: attach {* fun wfrec f x = f (wfrec f) x *} wenzelm@43498: wenzelm@43498: text {* If the code containing a call to @{const wfrec} resides in an wenzelm@43498: ML structure different from the one containing the function wenzelm@43498: definition attached to @{const wfrec}, the name of the ML structure wenzelm@43498: (followed by a ``@{text "."}'') is inserted in place of ``@{text wenzelm@43498: "\"}'' in the above template. The ``@{text "?"}'' means that wenzelm@43498: the code generator should ignore the first argument of @{const wenzelm@43498: wfrec}, i.e.\ the termination relation, which is usually not wenzelm@43498: executable. wenzelm@43498: wenzelm@43498: \medskip Another possibility of configuring the code generator is to wenzelm@43498: register theorems to be used for code generation. Theorems can be wenzelm@43498: registered via the @{attribute code} attribute. It takes an optional wenzelm@43498: name as an argument, which indicates the format of the wenzelm@43498: theorem. Currently supported formats are equations (this is the wenzelm@43498: default when no name is specified) and horn clauses (this is wenzelm@43498: indicated by the name \texttt{ind}). The left-hand sides of wenzelm@43498: equations may only contain constructors and distinct variables, wenzelm@43498: whereas horn clauses must have the same format as introduction rules wenzelm@43498: of inductive definitions. wenzelm@43498: wenzelm@43498: The following example specifies three equations from which to wenzelm@43498: generate code for @{term "op <"} on natural numbers (see also wenzelm@43498: @{"file" "~~/src/HOL/Nat.thy"}). *} wenzelm@43498: wenzelm@43498: lemma [code]: "(Suc m < Suc n) = (m < n)" wenzelm@43498: and [code]: "((n::nat) < 0) = False" wenzelm@43498: and [code]: "(0 < Suc n) = True" by simp_all wenzelm@43498: wenzelm@43498: wenzelm@43498: subsubsection {* Specific HOL code generators *} wenzelm@43498: wenzelm@43498: text {* The basic code generator framework offered by Isabelle/Pure wenzelm@43498: has already been extended with additional code generators for wenzelm@43498: specific HOL constructs. These include datatypes, recursive wenzelm@43498: functions and inductive relations. The code generator for inductive wenzelm@43498: relations can handle expressions of the form @{text "(t\<^sub>1, \, t\<^sub>n) \ wenzelm@43498: r"}, where @{text "r"} is an inductively defined relation. If at wenzelm@43498: least one of the @{text "t\<^sub>i"} is a dummy pattern ``@{text "_"}'', wenzelm@43498: the above expression evaluates to a sequence of possible answers. 
If wenzelm@43498: all of the @{text "t\<^sub>i"} are proper terms, the expression evaluates wenzelm@43498: to a boolean value. wenzelm@43498: wenzelm@43523: The following example demonstrates this for beta-reduction on lambda wenzelm@43523: terms (see also @{"file" "~~/src/HOL/Proofs/Lambda/Lambda.thy"}). wenzelm@43523: *} wenzelm@43498: wenzelm@43523: datatype dB = wenzelm@43523: Var nat wenzelm@43523: | App dB dB (infixl "\" 200) wenzelm@43523: | Abs dB wenzelm@43523: wenzelm@43523: primrec lift :: "dB \ nat \ dB" wenzelm@43523: where wenzelm@43523: "lift (Var i) k = (if i < k then Var i else Var (i + 1))" wenzelm@43523: | "lift (s \ t) k = lift s k \ lift t k" wenzelm@43523: | "lift (Abs s) k = Abs (lift s (k + 1))" wenzelm@43523: wenzelm@43523: primrec subst :: "dB \ dB \ nat \ dB" ("_[_'/_]" [300, 0, 0] 300) wenzelm@43523: where wenzelm@43523: "(Var i)[s/k] = wenzelm@43523: (if k < i then Var (i - 1) else if i = k then s else Var i)" wenzelm@43523: | "(t \ u)[s/k] = t[s/k] \ u[s/k]" wenzelm@43523: | "(Abs t)[s/k] = Abs (t[lift s 0 / k+1])" wenzelm@43523: wenzelm@43523: inductive beta :: "dB \ dB \ bool" (infixl "\\<^sub>\" 50) wenzelm@43523: where wenzelm@43523: beta: "Abs s \ t \\<^sub>\ s[t/0]" wenzelm@43523: | appL: "s \\<^sub>\ t \ s \ u \\<^sub>\ t \ u" wenzelm@43523: | appR: "s \\<^sub>\ t \ u \ s \\<^sub>\ u \ t" wenzelm@43523: | abs: "s \\<^sub>\ t \ Abs s \\<^sub>\ Abs t" wenzelm@43523: wenzelm@43523: code_module Test wenzelm@43523: contains wenzelm@43523: test1 = "Abs (Var 0) \ Var 0 \\<^sub>\ Var 0" wenzelm@43523: test2 = "Abs (Abs (Var 0 \ Var 0) \ (Abs (Var 0) \ Var 0)) \\<^sub>\ _" wenzelm@43523: wenzelm@43523: text {* wenzelm@43523: In the above example, @{ML Test.test1} evaluates to a boolean, wenzelm@43523: whereas @{ML Test.test2} is a lazy sequence whose elements can be wenzelm@43523: inspected separately. wenzelm@43523: *} wenzelm@43523: wenzelm@43523: ML {* @{assert} Test.test1 *} wenzelm@43523: ML {* val results = DSeq.list_of Test.test2 *} wenzelm@43523: ML {* @{assert} (length results = 2) *} wenzelm@43523: wenzelm@43523: text {* wenzelm@43498: \medskip The theory underlying the HOL code generator is described wenzelm@43498: more detailed in \cite{Berghofer-Nipkow:2002}. More examples that wenzelm@43498: illustrate the usage of the code generator can be found e.g.\ in wenzelm@43498: @{"file" "~~/src/HOL/MicroJava/J/JListExample.thy"} and @{"file" wenzelm@43498: "~~/src/HOL/MicroJava/JVM/JVMListExample.thy"}. wenzelm@43498: *} wenzelm@43498: wenzelm@43498: wenzelm@27045: section {* Definition by specification \label{sec:hol-specification} *} wenzelm@27045: wenzelm@27045: text {* wenzelm@27045: \begin{matharray}{rcl} wenzelm@28761: @{command_def (HOL) "specification"} & : & @{text "theory \ proof(prove)"} \\ wenzelm@28761: @{command_def (HOL) "ax_specification"} & : & @{text "theory \ proof(prove)"} \\ wenzelm@27045: \end{matharray} wenzelm@27045: wenzelm@43467: @{rail " wenzelm@43467: (@@{command (HOL) specification} | @@{command (HOL) ax_specification}) wenzelm@43467: '(' (decl +) ')' \\ (@{syntax thmdecl}? @{syntax prop} +) wenzelm@27045: ; wenzelm@43467: decl: ((@{syntax name} ':')? @{syntax term} '(' @'overloaded' ')'?) wenzelm@43467: "} wenzelm@27045: wenzelm@28760: \begin{description} wenzelm@27045: wenzelm@28760: \item @{command (HOL) "specification"}~@{text "decls \"} sets up a wenzelm@27045: goal stating the existence of terms with the properties specified to wenzelm@27045: hold for the constants given in @{text decls}. 
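  For example, given a (hypothetical) constant @{text g} declared
  beforehand via @{command consts} with type @{text "nat => nat"},
  the command @{command (HOL) "specification"}~@{text "(g)"} together
  with the single property @{text "g 0 = 1"} opens a proof obligation
  of the form @{text "\<exists>g. g 0 = 1"}, which can be discharged
  by exhibiting a witness such as @{text "\<lambda>n. 1"}.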
After finishing the wenzelm@27045: proof, the theory will be augmented with definitions for the given wenzelm@27045: constants, as well as with theorems stating the properties for these wenzelm@27045: constants. wenzelm@27045: wenzelm@28760: \item @{command (HOL) "ax_specification"}~@{text "decls \"} sets up wenzelm@28760: a goal stating the existence of terms with the properties specified wenzelm@28760: to hold for the constants given in @{text decls}. After finishing wenzelm@28760: the proof, the theory will be augmented with axioms expressing the wenzelm@28760: properties given in the first place. wenzelm@27045: wenzelm@28760: \item @{text decl} declares a constant to be defined by the wenzelm@27045: specification given. The definition for the constant @{text c} is wenzelm@27045: bound to the name @{text c_def} unless a theorem name is given in wenzelm@27045: the declaration. Overloaded constants should be declared as such. wenzelm@27045: wenzelm@28760: \end{description} wenzelm@27045: wenzelm@27045: Whether to use @{command (HOL) "specification"} or @{command (HOL) wenzelm@27045: "ax_specification"} is to some extent a matter of style. @{command wenzelm@27045: (HOL) "specification"} introduces no new axioms, and so by wenzelm@27045: construction cannot introduce inconsistencies, whereas @{command wenzelm@27045: (HOL) "ax_specification"} does introduce axioms, but only after the wenzelm@27045: user has explicitly proven it to be safe. A practical issue must be wenzelm@27045: considered, though: After introducing two constants with the same wenzelm@27045: properties using @{command (HOL) "specification"}, one can prove wenzelm@27045: that the two constants are, in fact, equal. If this might be a wenzelm@27045: problem, one should use @{command (HOL) "ax_specification"}. wenzelm@27045: *} wenzelm@27045: wenzelm@26840: end