doc-src/Ref/tctical.tex
changeset 332 01b87a921967
parent 323 361a71713176
child 1118 93ba05d8ccdc
--- a/doc-src/Ref/tctical.tex	Fri Apr 22 18:08:57 1994 +0200
+++ b/doc-src/Ref/tctical.tex	Fri Apr 22 18:18:37 1994 +0200
@@ -139,7 +139,7 @@
 Note {\tt REPEAT}'s explicit abstraction over the proof state.  Recursive
 tacticals must be coded in this awkward fashion to avoid infinite
 recursion.  With the following definition, \hbox{\tt REPEAT $tac$} would
-loop:
+loop due to \ML's eager evaluation strategy:
 \begin{ttbox} 
 fun REPEAT tac = (tac THEN REPEAT tac) ORELSE all_tac;
 \end{ttbox}
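For contrast, a minimal sketch of the non-looping form that the text alludes to; it assumes the 1994-era {\tt Tactic} constructor and {\tt tapply} function, where the explicit abstraction over {\tt state} delays evaluation of the recursive call:
\begin{ttbox} 
fun REPEAT tac = Tactic (fn state =>
    tapply ((tac THEN REPEAT tac) ORELSE all_tac, state));
\end{ttbox}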
@@ -185,8 +185,8 @@
 \index{tracing!of searching tacticals}
 \begin{ttbox} 
 DEPTH_FIRST   : (thm->bool) -> tactic -> tactic
-DEPTH_SOLVE   : tactic -> tactic
-DEPTH_SOLVE_1 : tactic -> tactic
+DEPTH_SOLVE   :                tactic -> tactic
+DEPTH_SOLVE_1 :                tactic -> tactic
 trace_DEPTH_FIRST: bool ref \hfill{\bf initially false}
 \end{ttbox}
 \begin{ttdescription}
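To make these signatures concrete, a hedged usage sketch; {\tt rules} is a hypothetical list of resolution rules, and {\tt has_fewer_prems 1} holds of a proof state with no remaining subgoals:
\begin{ttbox} 
(*search depth-first for a proof state with no remaining subgoals*)
DEPTH_FIRST (has_fewer_prems 1) (resolve_tac rules 1);

(*the same search, packaged as a single tactical*)
DEPTH_SOLVE (resolve_tac rules 1);
\end{ttbox}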
@@ -213,7 +213,7 @@
 \index{tacticals!searching}
 \index{tracing!of searching tacticals}
 \begin{ttbox} 
-BREADTH_FIRST   : (thm->bool) -> tactic -> tactic
+BREADTH_FIRST   :            (thm->bool) -> tactic -> tactic
 BEST_FIRST      : (thm->bool)*(thm->int) -> tactic -> tactic
 THEN_BEST_FIRST : tactic * ((thm->bool) * (thm->int) * tactic)
                   -> tactic                    \hfill{\bf infix 1}
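A hedged sketch of best-first search over the same hypothetical {\tt rules}, using {\tt size_of_thm} as the heuristic distance function and {\tt has_fewer_prems 1} as the satisfaction predicate:
\begin{ttbox} 
(*prefer small proof states; stop once no subgoals remain*)
BEST_FIRST (has_fewer_prems 1, size_of_thm) (resolve_tac rules 1);
\end{ttbox}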
@@ -330,11 +330,11 @@
 \hbox{\tt SELECT_GOAL {\it tac} $i$}.
 
 {\tt SELECT_GOAL} works by creating a state of the form $\phi\Imp\phi$,
-with the one subgoal~$\phi$.  If subgoal~$i$ has the form then
-$(\psi\Imp\theta)\Imp(\psi\Imp\theta)$ is in fact $\List{\psi\Imp\theta;\;
-  \psi}\Imp\theta$, a proof state with two subgoals.  Such a proof state
-might cause tactics to go astray.  Therefore {\tt SELECT_GOAL} inserts a
-quantifier to create the state
+with the one subgoal~$\phi$.  If subgoal~$i$ has the form $\psi\Imp\theta$
+then $(\psi\Imp\theta)\Imp(\psi\Imp\theta)$ is in fact
+$\List{\psi\Imp\theta;\; \psi}\Imp\theta$, a proof state with two subgoals.
+Such a proof state might cause tactics to go astray.  Therefore {\tt
+  SELECT_GOAL} inserts a quantifier to create the state
 \[ (\Forall x.\psi\Imp\theta)\Imp(\Forall x.\psi\Imp\theta). \]
 
 \item[\ttindexbold{METAHYPS} {\it tacf} $i$]\index{meta-assumptions}
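Returning to {\tt SELECT_GOAL}, a brief usage sketch; the tactic {\tt tac} and the subgoal number are purely illustrative:
\begin{ttbox} 
(*apply tac with subgoal 3 treated as the entire proof state*)
SELECT_GOAL tac 3;
\end{ttbox}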
@@ -362,6 +362,8 @@
 \begin{ttbox} 
 val hyp_res_tac = METAHYPS (fn prems => resolve_tac prems 1);
 \end{ttbox} 
+The function \ttindex{gethyps} is useful for debugging applications of {\tt
+  METAHYPS}. 
 \end{ttdescription}
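As a concrete illustration of that debugging hint, a hedged sketch of an interactive session; it assumes the top-level functions {\tt gethyps} and {\tt prths} for fetching and printing the meta-assumptions of subgoal~1 before trying {\tt hyp_res_tac}:
\begin{ttbox} 
(*inspect the meta-level hypotheses that METAHYPS would supply for subgoal 1*)
prths (gethyps 1);
(*then attack the subgoal by resolution against those hypotheses*)
by (hyp_res_tac 1);
\end{ttbox}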
 
 \begin{warn}