\chapter{First-Order Sequent Calculus}
\index{sequent calculus|(}

The theory~\thydx{LK} implements classical first-order logic through Gentzen's
sequent calculus (see Gallier~\cite{gallier86} or Takeuti~\cite{takeuti87}).
Resembling the method of semantic tableaux, the calculus is well suited for
backwards proof.  Assertions have the form \(\Gamma\turn \Delta\), where
\(\Gamma\) and \(\Delta\) are lists of formulae.  Associative unification,
simulated by higher-order unification, handles lists
(\S\ref{sec:assoc-unification} presents the details, if you are interested).

The logic is many-sorted, using Isabelle's type classes.  The class of
first-order terms is called \cldx{term}.  No types of individuals are
provided, but extensions can define types such as {\tt nat::term} and type
constructors such as {\tt list::(term)term}.  Below, the type variable
$\alpha$ ranges over class {\tt term}; the equality symbol and quantifiers
are polymorphic (many-sorted).  The type of formulae is~\tydx{o}, which
belongs to class {\tt logic}.

LK implements a classical theorem prover that is nearly as powerful as
the generic classical reasoner.  A simplifier is also available.

To work in LK, start up Isabelle specifying \texttt{Sequents} as the
object-logic.  Once in Isabelle, change the context to theory \texttt{LK.thy}:
\begin{ttbox}
isabelle Sequents
context LK.thy;
\end{ttbox}
Modal logic and linear logic are also available, but unfortunately they are
not documented.
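As a first taste of the system, a trivial sequent can be stated and proved in
one step.  The following sketch is our own illustration (the goal is not taken
from the distribution), using the tactic \texttt{Fast_tac} described later in
this chapter:
\begin{ttbox}
Goal "P, Q |- Q & P";
by (Fast_tac 1);
\end{ttbox}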


\begin{figure} 
\begin{center}
\begin{tabular}{rrr} 
  \it name      &\it meta-type          & \it description       \\ 
  \cdx{Trueprop}& $[sobj\To sobj, sobj\To sobj]\To prop$ & coercion to $prop$\\
  \cdx{Seqof}   & $[o,sobj]\To sobj$    & singleton sequence    \\
  \cdx{Not}     & $o\To o$              & negation ($\neg$)     \\
  \cdx{True}    & $o$                   & tautology ($\top$)    \\
  \cdx{False}   & $o$                   & absurdity ($\bot$)
\end{tabular}
\end{center}
\subcaption{Constants}

\begin{center}
\begin{tabular}{llrrr} 
  \it symbol &\it name     &\it meta-type & \it priority & \it description \\
  \sdx{ALL}  & \cdx{All}   & $(\alpha\To o)\To o$ & 10 & 
        universal quantifier ($\forall$) \\
  \sdx{EX}   & \cdx{Ex}    & $(\alpha\To o)\To o$ & 10 & 
        existential quantifier ($\exists$) \\
  \sdx{THE}  & \cdx{The}   & $(\alpha\To o)\To \alpha$ & 10 & 
        definite description ($\iota$)
\end{tabular}
\end{center}
\subcaption{Binders} 

\begin{center}
\index{*"= symbol}
\index{&@{\tt\&} symbol}
\index{*"| symbol}
\index{*"-"-"> symbol}
\index{*"<"-"> symbol}
\begin{tabular}{rrrr} 
    \it symbol & \it meta-type & \it priority & \it description \\ 
    \tt =    & $[\alpha,\alpha]\To o$ & Left 50  & equality ($=$) \\
    \tt \&   & $[o,o]\To o$           & Right 35 & conjunction ($\conj$) \\
    \tt |    & $[o,o]\To o$           & Right 30 & disjunction ($\disj$) \\
    \tt -->  & $[o,o]\To o$           & Right 25 & implication ($\imp$) \\
    \tt <->  & $[o,o]\To o$           & Right 25 & biconditional ($\bimp$) 
\end{tabular}
\end{center}
\subcaption{Infixes}

\begin{center}
\begin{tabular}{rrr} 
  \it external              & \it internal                      & \it description \\ 
  \tt $\Gamma$ |- $\Delta$  & \tt Trueprop($\Gamma$, $\Delta$)  &
        sequent $\Gamma\turn \Delta$ 
\end{tabular}
\end{center}
\subcaption{Translations} 
\caption{Syntax of {\tt LK}} \label{lk-syntax}
\end{figure}


\begin{figure} 
\dquotes
\[\begin{array}{rcl}
    prop & = & sequence " |- " sequence 
\\[2ex]
sequence & = & elem \quad (", " elem)^* \\
         & | & empty 
\\[2ex]
    elem & = & "\$ " term \\
         & | & formula  \\
         & | & "<<" sequence ">>" 
\\[2ex]
 formula & = & \hbox{expression of type~$o$} \\
         & | & term " = " term \\
         & | & "\ttilde\ " formula \\
         & | & formula " \& " formula \\
         & | & formula " | " formula \\
         & | & formula " --> " formula \\
         & | & formula " <-> " formula \\
         & | & "ALL~" id~id^* " . " formula \\
         & | & "EX~~" id~id^* " . " formula \\
         & | & "THE~" id~ " . " formula
  \end{array}
\]
\caption{Grammar of {\tt LK}} \label{lk-grammar}
\end{figure}


\begin{figure} 
\begin{ttbox}
\tdx{basic}       $H, P, $G |- $E, P, $F

\tdx{contRS}      $H |- $E, $S, $S, $F ==> $H |- $E, $S, $F
\tdx{contLS}      $H, $S, $S, $G |- $E ==> $H, $S, $G |- $E

\tdx{thinRS}      $H |- $E, $F ==> $H |- $E, $S, $F
\tdx{thinLS}      $H, $G |- $E ==> $H, $S, $G |- $E

\tdx{cut}         [| $H |- $E, P;  $H, P |- $E |] ==> $H |- $E
\subcaption{Structural rules}

\tdx{refl}        $H |- $E, a=a, $F
\tdx{subst}       $H(a), $G(a) |- $E(a) ==> $H(b), a=b, $G(b) |- $E(b)
\subcaption{Equality rules}
\end{ttbox}

\caption{Basic Rules of {\tt LK}}  \label{lk-basic-rules}
\end{figure}

\begin{figure} 
\begin{ttbox}
\tdx{True_def}    True == False-->False
\tdx{iff_def}     P<->Q == (P-->Q) & (Q-->P)

\tdx{conjR}   [| $H|- $E, P, $F;  $H|- $E, Q, $F |] ==> $H|- $E, P&Q, $F
\tdx{conjL}   $H, P, Q, $G |- $E ==> $H, P & Q, $G |- $E

\tdx{disjR}   $H |- $E, P, Q, $F ==> $H |- $E, P|Q, $F
\tdx{disjL}   [| $H, P, $G |- $E;  $H, Q, $G |- $E |] ==> $H, P|Q, $G |- $E

\tdx{impR}    $H, P |- $E, Q, $F ==> $H |- $E, P-->Q, $F
\tdx{impL}    [| $H,$G |- $E,P;  $H, Q, $G |- $E |] ==> $H, P-->Q, $G |- $E

\tdx{notR}    $H, P |- $E, $F ==> $H |- $E, ~P, $F
\tdx{notL}    $H, $G |- $E, P ==> $H, ~P, $G |- $E

\tdx{FalseL}  $H, False, $G |- $E

\tdx{allR}    (!!x. $H|- $E, P(x), $F) ==> $H|- $E, ALL x. P(x), $F
\tdx{allL}    $H, P(x), $G, ALL x. P(x) |- $E ==> $H, ALL x. P(x), $G|- $E

\tdx{exR}     $H|- $E, P(x), $F, EX x. P(x) ==> $H|- $E, EX x. P(x), $F
\tdx{exL}     (!!x. $H, P(x), $G|- $E) ==> $H, EX x. P(x), $G|- $E

\tdx{The}     [| $H |- $E, P(a), $F;  !!x. $H, P(x) |- $E, x=a, $F |] ==>
              $H |- $E, P(THE x. P(x)), $F
\subcaption{Logical rules}
\end{ttbox}

\caption{Rules of {\tt LK}}  \label{lk-rules}
\end{figure}


\section{Syntax and rules of inference}
\index{*sobj type}

Figure~\ref{lk-syntax} gives the syntax for {\tt LK}, which is complicated
by the representation of sequents.  Type $sobj\To sobj$ represents a list
of formulae.

The \textbf{definite description} operator~$\iota x. P[x]$ stands for some~$a$
satisfying~$P[a]$, if one exists and is unique.  Since all terms in LK denote
something, a description is always meaningful, but we do not know its value
unless $P[x]$ defines it uniquely.
The Isabelle notation is \hbox{\tt THE $x$.\ $P[x]$}.  The corresponding
rule (Fig.\ts\ref{lk-rules}) does not entail the Axiom of Choice because
it requires uniqueness.

Conditional expressions are available with the notation
\[ \dquotes
   "if"~formula~"then"~term~"else"~term. \]

Figure~\ref{lk-grammar} presents the grammar of LK.  Traditionally,
\(\Gamma\) and \(\Delta\) are meta-variables for sequences.  In Isabelle's
notation, the prefix~\verb|$| on a term makes it range over sequences.
In a sequent, anything not prefixed by \verb|$| is taken as a formula.

The notation \texttt{<<$sequence$>>} stands for a sequence of formul\ae{}.
For example, you can declare the constant \texttt{imps} to consist of two
implications:
\begin{ttbox}
consts    P,Q,R :: o
constdefs imps :: seq'=>seq'
         "imps == <<P --> Q, Q --> R>>"
\end{ttbox}
Then you can use it in axioms and goals, for example
\begin{ttbox}
Goalw [imps_def] "P, $imps |- R"; 
{\out Level 0}
{\out P, $imps |- R}
{\out 1. P, P --> Q, Q --> R |- R}
by (Fast_tac 1);
{\out Level 1}
{\out P, $imps |- R}
{\out No subgoals!}
\end{ttbox}

Figures~\ref{lk-basic-rules} and~\ref{lk-rules} present the rules of theory
\thydx{LK}.  The connective $\bimp$ is defined using $\conj$ and $\imp$.  The
axiom for basic sequents is expressed in a form that provides automatic
thinning: redundant formulae are simply ignored.  The other rules are
expressed in the form most suitable for backward proof; exchange and
contraction rules are not normally required, although they are provided
anyway.


\begin{figure} 
\begin{ttbox}
\tdx{thinR}        $H |- $E, $F ==> $H |- $E, P, $F
\tdx{thinL}        $H, $G |- $E ==> $H, P, $G |- $E

\tdx{contR}        $H |- $E, P, P, $F ==> $H |- $E, P, $F
\tdx{contL}        $H, P, P, $G |- $E ==> $H, P, $G |- $E

\tdx{symR}         $H |- $E, $F, a=b ==> $H |- $E, b=a, $F
\tdx{symL}         $H, $G, b=a |- $E ==> $H, a=b, $G |- $E

\tdx{transR}       [| $H|- $E, $F, a=b;  $H|- $E, $F, b=c |] 
                   ==> $H|- $E, a=c, $F

\tdx{TrueR}        $H |- $E, True, $F

\tdx{iffR}         [| $H, P |- $E, Q, $F;  $H, Q |- $E, P, $F |]
                   ==> $H |- $E, P<->Q, $F

\tdx{iffL}         [| $H, $G |- $E, P, Q;  $H, Q, P, $G |- $E |]
                   ==> $H, P<->Q, $G |- $E

\tdx{allL_thin}    $H, P(x), $G |- $E ==> $H, ALL x. P(x), $G |- $E
\tdx{exR_thin}     $H |- $E, P(x), $F ==> $H |- $E, EX x. P(x), $F

\tdx{the_equality} [| $H |- $E, P(a), $F;  
                      !!x. $H, P(x) |- $E, x=a, $F |] 
                   ==> $H |- $E, (THE x. P(x)) = a, $F
\end{ttbox}

\caption{Derived rules for {\tt LK}} \label{lk-derived}
\end{figure}

Figure~\ref{lk-derived} presents derived rules, including rules for
$\bimp$.  The weakened quantifier rules discard each quantification after a
single use; in an automatic proof procedure, they guarantee termination,
but are incomplete.  Multiple use of a quantifier can be obtained by a
contraction rule, which in backward proof duplicates a formula.  The tactic
{\tt res_inst_tac} can instantiate the variable~{\tt?P} in these rules,
specifying the formula to duplicate.
See theory {\tt Sequents/LK0} in the sources for complete listings of
the rules and derived rules.

To support the simplifier, hundreds of equivalences are proved for
the logical connectives and for if-then-else expressions.  See the file
\texttt{Sequents/simpdata.ML}.

\section{Automatic Proof}

LK instantiates Isabelle's simplifier.  Both equality ($=$) and the
biconditional ($\bimp$) may be used for rewriting.  The tactic
\texttt{Simp_tac} refers to the default simpset (\texttt{simpset()}).  With
sequents, the \texttt{full_} and \texttt{asm_} forms of the simplifier are not
required; all the formulae in the sequent will be simplified.  The left-hand
formulae are taken as rewrite rules.  (Thus, the behaviour is what you would
normally expect from calling \texttt{Asm_full_simp_tac}.)
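For example, the following sketch (our own illustration, not taken from the
distribution) shows how \texttt{Simp_tac} might dispose of a goal whose
right-hand formula reduces to a trivial biconditional under the default
simpset:
\begin{ttbox}
Goal "|- (P & True) <-> P";
by (Simp_tac 1);
\end{ttbox}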
For classical reasoning, several tactics are available:
\begin{ttbox} 
Safe_tac : int -> tactic
Step_tac : int -> tactic
Fast_tac : int -> tactic 
Best_tac : int -> tactic 
Pc_tac   : int -> tactic 
\end{ttbox}
These refer not to the standard classical reasoner but to a separate one
provided for the sequent calculus.  Two commands are available for adding new
sequent calculus rules, safe or unsafe, to the default ``theorem pack'':
\begin{ttbox} 
Add_safes   : thm list -> unit
Add_unsafes : thm list -> unit
\end{ttbox}
To control the set of rules for individual invocations, lower-case versions of
all these primitives are available.  Sections~\ref{sec:thm-pack}
and~\ref{sec:sequent-provers} give full details.


\section{Tactics for the cut rule}

According to the cut-elimination theorem, the cut rule can be eliminated
from proofs of sequents.  But the rule is still essential.  It can be used
to structure a proof into lemmas, avoiding repeated proofs of the same
formula.  More importantly, the cut rule cannot be eliminated from
derivations of rules.  For example, there is a trivial cut-free proof of
the sequent \(P\conj Q\turn Q\conj P\).
Noting this, we might want to derive a rule for swapping the conjuncts
in a right-hand formula:
\[ \Gamma\turn \Delta, P\conj Q\over \Gamma\turn \Delta, Q\conj P \]
The cut rule must be used, for $P\conj Q$ is not a subformula of $Q\conj
P$.  Most cuts directly involve a premise of the rule being derived (a
meta-assumption).
In a few cases, the cut formula is not part of any
premise, but serves as a bridge between the premises and the conclusion.
In such proofs, the cut formula is specified by calling an appropriate
tactic.
\begin{ttbox} 
cutR_tac : string -> int -> tactic
cutL_tac : string -> int -> tactic
\end{ttbox}
These tactics refine a subgoal into two by applying the cut rule.  The cut
formula is given as a string, and replaces some other formula in the sequent.
\begin{ttdescription}
\item[\ttindexbold{cutR_tac} {\it P\/} {\it i}] reads an LK formula~$P$, and
  applies the cut rule to subgoal~$i$.  It then deletes some formula from the
  right side of subgoal~$i$, replacing that formula by~$P$.
  
\item[\ttindexbold{cutL_tac} {\it P\/} {\it i}] reads an LK formula~$P$, and
  applies the cut rule to subgoal~$i$.  It then deletes some formula from the
  left side of the new subgoal $i+1$, replacing that formula by~$P$.
\end{ttdescription}
All the structural rules --- cut, contraction, and thinning --- can be
applied to particular formulae using {\tt res_inst_tac}.


\section{Tactics for sequents}
\begin{ttbox} 
forms_of_seq       : term -> term list
could_res          : term * term -> bool
could_resolve_seq  : term * term -> bool
filseq_resolve_tac : thm list -> int -> int -> tactic
\end{ttbox}
Associative unification is not as efficient as it might be, in part because
the representation of lists defeats some of Isabelle's internal
optimisations.  The following operations implement faster rule application,
and may have other uses.
\begin{ttdescription}
\item[\ttindexbold{forms_of_seq} {\it t}] 
  returns the list of all formulae in the sequent~$t$, removing sequence
  variables.

\item[\ttindexbold{could_res} ($t$,$u$)] 
  tests whether two formula lists could be resolved.  List $t$ is from a
  premise or subgoal, while $u$ is from the conclusion of an object-rule.
  Assuming that each formula in $u$ is surrounded by sequence variables, it
  checks that each conclusion formula is unifiable (using {\tt could_unify})
  with some subgoal formula.

\item[\ttindexbold{could_resolve_seq} ($t$,$u$)] 
  tests whether two sequents could be resolved.  Sequent $t$ is a premise
  or subgoal, while $u$ is the conclusion of an object-rule.  It simply
  calls {\tt could_res} twice to check that both the left and the right
  sides of the sequents are compatible.

\item[\ttindexbold{filseq_resolve_tac} {\it thms} {\it maxr} {\it i}] 
  uses {\tt filter_thms could_resolve} to extract the {\it thms} that are
  applicable to subgoal~$i$.  If more than {\it maxr\/} theorems are
  applicable then the tactic fails.  Otherwise it calls {\tt resolve_tac}.
  Thus, it is the sequent calculus analogue of \ttindex{filt_resolve_tac}.
\end{ttdescription}


\section{A simple example of classical reasoning} 
The theorem $\turn\ex{y}\all{x}P(y)\imp P(x)$ is a standard example of the
classical treatment of the existential quantifier.  Classical reasoning is
easy using~LK, as you can see by comparing this proof with the one given in
the FOL manual~\cite{isabelle-ZF}.  From a logical point of view, the proofs
are essentially the same; the key step here is to use \tdx{exR} rather than
the weaker~\tdx{exR_thin}.
\begin{ttbox}
Goal "|- EX y. ALL x. P(y)-->P(x)";
{\out Level 0}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. |- EX y. ALL x. P(y) --> P(x)}
by (resolve_tac [exR] 1);
{\out Level 1}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. |- ALL x. P(?x) --> P(x), EX x. ALL xa. P(x) --> P(xa)}
\end{ttbox}
There are now two formulae on the right side.  Keeping the existential one
in reserve, we break down the universal one.
\begin{ttbox}
by (resolve_tac [allR] 1);
{\out Level 2}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. !!x. |- P(?x) --> P(x), EX x. ALL xa. P(x) --> P(xa)}
by (resolve_tac [impR] 1);
{\out Level 3}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. !!x. P(?x) |- P(x), EX x. ALL xa. P(x) --> P(xa)}
\end{ttbox}
Because LK is a sequent calculus, the formula~$P(\Var{x})$ does not become an
assumption; instead, it moves to the left side.  The resulting subgoal cannot
be instantiated to a basic sequent: the bound variable~$x$ is not unifiable
with the unknown~$\Var{x}$.
\begin{ttbox}
by (resolve_tac [basic] 1);
{\out by: tactic failed}
\end{ttbox}
We reuse the existential formula using~\tdx{exR_thin}, which discards
it; we shall not need it a third time.  We again break down the resulting
formula.
\begin{ttbox}
by (resolve_tac [exR_thin] 1);
{\out Level 4}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. !!x. P(?x) |- P(x), ALL xa. P(?x7(x)) --> P(xa)}
by (resolve_tac [allR] 1);
{\out Level 5}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. !!x xa. P(?x) |- P(x), P(?x7(x)) --> P(xa)}
by (resolve_tac [impR] 1);
{\out Level 6}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. !!x xa. P(?x), P(?x7(x)) |- P(x), P(xa)}
\end{ttbox}
Subgoal~1 seems to offer lots of possibilities.  Actually the only useful
step is instantiating~$\Var{x@7}$ to $\lambda x. x$,
transforming~$\Var{x@7}(x)$ into~$x$.
\begin{ttbox}
by (resolve_tac [basic] 1);
{\out Level 7}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out No subgoals!}
\end{ttbox}
This theorem can be proved automatically.  Because it involves quantifier
duplication, we employ best-first search:
\begin{ttbox}
Goal "|- EX y. ALL x. P(y)-->P(x)";
{\out Level 0}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out 1. |- EX y. ALL x. P(y) --> P(x)}
by (best_tac LK_dup_pack 1);
{\out Level 1}
{\out |- EX y. ALL x. P(y) --> P(x)}
{\out No subgoals!}
\end{ttbox}



\section{A more complex proof}
Many of Pelletier's test problems for theorem provers \cite{pelletier86}
can be solved automatically.  Problem~39 concerns set theory, asserting
that there is no Russell set --- a set consisting of those sets that are
not members of themselves:
\[  \turn \neg (\exists x. \forall y. y\in x \bimp y\not\in y) \]
This does not require special properties of membership; we may generalize
$x\in y$ to an arbitrary predicate~$F(x,y)$.  The theorem, which is trivial
for \texttt{Fast_tac}, has a short manual proof.  See the directory {\tt
  Sequents/LK} for many more examples.

We set the main goal and move the negated formula to the left.
\begin{ttbox}
Goal "|- ~ (EX x. ALL y. F(y,x) <-> ~F(y,y))";
{\out Level 0}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out 1. |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
by (resolve_tac [notR] 1);
{\out Level 1}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out 1. EX x. ALL y. F(y,x) <-> ~ F(y,y) |-}
\end{ttbox}
The right side is empty; we strip both quantifiers from the formula on the
left.
\begin{ttbox}
by (resolve_tac [exL] 1);
{\out Level 2}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out 1. !!x. ALL y. F(y,x) <-> ~ F(y,y) |-}
by (resolve_tac [allL_thin] 1);
{\out Level 3}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out 1. !!x. F(?x2(x),x) <-> ~ F(?x2(x),?x2(x)) |-}
\end{ttbox}
The rule \tdx{iffL} says, if $P\bimp Q$ then $P$ and~$Q$ are either
both true or both false.  It yields two subgoals.
\begin{ttbox}
by (resolve_tac [iffL] 1);
{\out Level 4}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out 1. !!x. |- F(?x2(x),x), ~ F(?x2(x),?x2(x))}
{\out 2. !!x. ~ F(?x2(x),?x2(x)), F(?x2(x),x) |-}
\end{ttbox}
We must instantiate~$\Var{x@2}$, the shared unknown, to satisfy both
subgoals.  Beginning with subgoal~2, we move a negated formula to the left
and create a basic sequent.
\begin{ttbox}
by (resolve_tac [notL] 2);
{\out Level 5}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out 1. !!x. |- F(?x2(x),x), ~ F(?x2(x),?x2(x))}
{\out 2. !!x. F(?x2(x),x) |- F(?x2(x),?x2(x))}
by (resolve_tac [basic] 2);
{\out Level 6}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out 1. !!x. |- F(x,x), ~ F(x,x)}
\end{ttbox}
Thanks to the instantiation of~$\Var{x@2}$, subgoal~1 is obviously true.
\begin{ttbox}
by (resolve_tac [notR] 1);
{\out Level 7}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out 1. !!x. F(x,x) |- F(x,x)}
by (resolve_tac [basic] 1);
{\out Level 8}
{\out |- ~ (EX x. ALL y. F(y,x) <-> ~ F(y,y))}
{\out No subgoals!}
\end{ttbox}

\section{*Unification for lists}\label{sec:assoc-unification}

Higher-order unification includes associative unification as a special
case, by an encoding that involves function composition
\cite[page~37]{huet78}.  To represent lists, let $C$ be a new constant.
The empty list is $\lambda x. x$, while $[t@1,t@2,\ldots,t@n]$ is
represented by
\[ \lambda x. C(t@1,C(t@2,\ldots,C(t@n,x))).  \]
The unifiers of this with $\lambda x.\Var{f}(\Var{g}(x))$ give all the ways
of expressing $[t@1,t@2,\ldots,t@n]$ as the concatenation of two lists.

Unlike orthodox associative unification, this technique can represent certain
infinite sets of unifiers by flex-flex equations.  But note that the term
$\lambda x. C(t,\Var{a})$ does not represent any list.  Flex-flex constraints
containing such garbage terms may accumulate during a proof.
\index{flex-flex constraints}

This technique lets Isabelle formalize sequent calculus rules,
where the comma is the associative operator:
\[ \infer[(\conj\hbox{-left})]
         {\Gamma,P\conj Q,\Delta \turn \Theta} 
         {\Gamma,P,Q,\Delta \turn \Theta}  \] 
Multiple unifiers occur whenever this is resolved against a goal containing
more than one conjunction on the left.

LK exploits this representation of lists.  As an alternative, the sequent
calculus can be formalized using an ordinary representation of lists, with a
logic program for removing a formula from a list.
Amy Felty has applied this
technique using the language $\lambda$Prolog~\cite{felty91a}.

Explicit formalization of sequents can be tiresome.  But it gives precise
control over contraction and weakening, and is essential to handle relevant
and linear logics.


\section{*Packaging sequent rules}\label{sec:thm-pack}

The sequent calculi come with simple proof procedures.  These are incomplete
but are reasonably powerful for interactive use.  They expect rules to be
classified as \textbf{safe} or \textbf{unsafe}.  A rule is safe if applying it to a
provable goal always yields provable subgoals.  If a rule is safe then it can
be applied automatically to a goal without destroying our chances of finding a
proof.  For instance, all the standard rules of the classical sequent calculus
{\sc lk} are safe.  An unsafe rule may render the goal unprovable; typical
examples are the weakened quantifier rules {\tt allL_thin} and {\tt exR_thin}.

Proof procedures use safe rules whenever possible, using an unsafe rule as a
last resort.  Those safe rules are preferred that generate the fewest
subgoals.  Safe rules are (by definition) deterministic, while the unsafe
rules require a search strategy, such as backtracking.

A \textbf{pack} is a pair whose first component is a list of safe rules and
whose second is a list of unsafe rules.  Packs can be extended in an obvious
way to allow reasoning with various collections of rules.
For clarity, LK
declares \mltydx{pack} as an \ML{} datatype, although it is essentially a type
synonym:
\begin{ttbox}
datatype pack = Pack of thm list * thm list;
\end{ttbox}
Pattern-matching using constructor {\tt Pack} can inspect a pack's
contents.  Packs support the following operations:
\begin{ttbox} 
pack        : unit -> pack
pack_of     : theory -> pack
empty_pack  : pack
prop_pack   : pack
LK_pack     : pack
LK_dup_pack : pack
add_safes   : pack * thm list -> pack         \hfill\textbf{infix 4}
add_unsafes : pack * thm list -> pack         \hfill\textbf{infix 4}
\end{ttbox}
\begin{ttdescription}
\item[\ttindexbold{pack}] returns the pack attached to the current theory.

\item[\ttindexbold{pack_of $thy$}] returns the pack attached to theory $thy$.

\item[\ttindexbold{empty_pack}] is the empty pack.

\item[\ttindexbold{prop_pack}] contains the propositional rules, namely
those for $\conj$, $\disj$, $\neg$, $\imp$ and~$\bimp$, along with the
rules {\tt basic} and {\tt refl}.  These are all safe.

\item[\ttindexbold{LK_pack}] 
extends {\tt prop_pack} with the safe rules {\tt allR}
and~{\tt exL} and the unsafe rules {\tt allL_thin} and
{\tt exR_thin}.  Search using this is incomplete since quantified
formulae are used at most once.

\item[\ttindexbold{LK_dup_pack}] 
extends {\tt prop_pack} with the safe rules {\tt allR}
and~{\tt exL} and the unsafe rules \tdx{allL} and~\tdx{exR}.
Search using this is complete, since quantified formulae may be reused, but
frequently fails to terminate.  It is generally unsuitable for depth-first
search.

\item[$pack$ \ttindexbold{add_safes} $rules$] 
adds some safe~$rules$ to the pack~$pack$.

\item[$pack$ \ttindexbold{add_unsafes} $rules$] 
adds some unsafe~$rules$ to the pack~$pack$.
\end{ttdescription}


\section{*Proof procedures}\label{sec:sequent-provers}

The LK proof procedure is similar to the classical reasoner described in
\iflabelundefined{chap:classical}{the {\em Reference Manual\/}}%
{Chap.\ts\ref{chap:classical}}.  
%
In fact it is simpler, since it works directly with sequents rather than
simulating them.  There is no need to distinguish introduction rules from
elimination rules, and of course there is no swap rule.  As always,
Isabelle's classical proof procedures are less powerful than resolution
theorem provers.  But they are more natural and flexible, working with an
open-ended set of rules.

Backtracking over the choice of a safe rule accomplishes nothing: applying
them in any order leads to essentially the same result.  Backtracking may
be necessary over basic sequents when they perform unification.
Suppose
that~0, 1, 2,~3 are constants in the subgoals
\[  \begin{array}{c}
      P(0), P(1), P(2) \turn P(\Var{a})  \\
      P(0), P(2), P(3) \turn P(\Var{a})  \\
      P(1), P(3), P(2) \turn P(\Var{a})  
    \end{array}
\]
The only assignment that satisfies all three subgoals is $\Var{a}\mapsto 2$,
and this can only be discovered by search.  The tactics given below permit
backtracking only over axioms, such as {\tt basic} and {\tt refl};
otherwise they are deterministic.


\subsection{Method A}
\begin{ttbox} 
reresolve_tac   : thm list -> int -> tactic
repeat_goal_tac : pack -> int -> tactic
pc_tac          : pack -> int -> tactic
\end{ttbox}
These tactics use a method developed by Philippe de Groote.  A subgoal is
refined and the resulting subgoals are attempted in reverse order.  For
some reason, this is much faster than attempting the subgoals in order.
The method is inherently depth-first.

At present, these tactics only work for rules that have no more than two
premises.  They fail --- return no next state --- if they can do nothing.
\begin{ttdescription}
\item[\ttindexbold{reresolve_tac} $thms$ $i$] 
repeatedly applies the $thms$ to subgoal $i$ and the resulting subgoals.

\item[\ttindexbold{repeat_goal_tac} $pack$ $i$] 
applies the safe rules in the pack to a goal and the resulting subgoals.
If no safe rule is applicable then it applies an unsafe rule and continues.

\item[\ttindexbold{pc_tac} $pack$ $i$] 
applies {\tt repeat_goal_tac} using depth-first search to solve subgoal~$i$.
\end{ttdescription}


\subsection{Method B}
\begin{ttbox} 
safe_tac : pack -> int -> tactic
step_tac : pack -> int -> tactic
fast_tac : pack -> int -> tactic 
best_tac : pack -> int -> tactic 
\end{ttbox}
These tactics are analogous to those of the generic classical
reasoner.  They use `Method~A' only on safe rules.  They fail if they
can do nothing.
\begin{ttdescription}
\item[\ttindexbold{safe_tac} $pack$ $i$] 
applies the safe rules in the pack to a goal and the resulting subgoals.
It ignores the unsafe rules.  

\item[\ttindexbold{step_tac} $pack$ $i$] 
either applies safe rules (using {\tt safe_tac}) or applies one unsafe
rule.

\item[\ttindexbold{fast_tac} $pack$ $i$] 
applies {\tt step_tac} using depth-first search to solve subgoal~$i$.
Despite its name, it is frequently slower than {\tt pc_tac}.

\item[\ttindexbold{best_tac} $pack$ $i$] 
applies {\tt step_tac} using best-first search to solve subgoal~$i$.  It is
particularly useful for quantifier duplication (using \ttindex{LK_dup_pack}).
\end{ttdescription}



\index{sequent calculus|)}