mlehnfeld thesis updated to final version
authorMathias Lehnfeld <s1210629013@students.fh-hagenberg.at>
Thu, 26 Jun 2014 17:19:30 +0200
changeset 5546655c2d2ee3f92
parent 55465 eaa7d67b78d0
child 55467 2e9db65faf65
mlehnfeld thesis updated to final version
doc-isac/mlehnfeld/master/thesis/LICENSE
doc-isac/mlehnfeld/master/thesis/abstract.tex
doc-isac/mlehnfeld/master/thesis/appendix_a.tex
doc-isac/mlehnfeld/master/thesis/appendix_b.tex
doc-isac/mlehnfeld/master/thesis/conclusion.tex
doc-isac/mlehnfeld/master/thesis/data/thinkpad.dat
doc-isac/mlehnfeld/master/thesis/data/thinkpad_seq.dat
doc-isac/mlehnfeld/master/thesis/data/thinkpadavg.dat
doc-isac/mlehnfeld/master/thesis/data/thinkpadavg_seq.dat
doc-isac/mlehnfeld/master/thesis/declaration.tex
doc-isac/mlehnfeld/master/thesis/fundamentals.tex
doc-isac/mlehnfeld/master/thesis/funproglangs_mcsystems.tex
doc-isac/mlehnfeld/master/thesis/hgb.sty
doc-isac/mlehnfeld/master/thesis/images/QED_Editor.png
doc-isac/mlehnfeld/master/thesis/images/jedit_feedback.png
doc-isac/mlehnfeld/master/thesis/images/next_btn.png
doc-isac/mlehnfeld/master/thesis/images/sledgehammer.png
doc-isac/mlehnfeld/master/thesis/images/theories_processing.png
doc-isac/mlehnfeld/master/thesis/introduction.tex
doc-isac/mlehnfeld/master/thesis/isabelle_isac.tex
doc-isac/mlehnfeld/master/thesis/kurzfassung.tex
doc-isac/mlehnfeld/master/thesis/literature.bib
doc-isac/mlehnfeld/master/thesis/preface.tex
doc-isac/mlehnfeld/master/thesis/thesis.tex
     1.1 --- a/doc-isac/mlehnfeld/master/thesis/LICENSE	Thu Jun 26 10:50:27 2014 +0200
     1.2 +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.3 @@ -1,1 +0,0 @@
     1.4 -The documents contained in this subfolder are published under the conditions of the Creative Commons License Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND) — see http://creativecommons.org/licenses/by-nc-nd/3.0/.
     1.5 \ No newline at end of file
     2.1 --- a/doc-isac/mlehnfeld/master/thesis/abstract.tex	Thu Jun 26 10:50:27 2014 +0200
     2.2 +++ b/doc-isac/mlehnfeld/master/thesis/abstract.tex	Thu Jun 26 17:19:30 2014 +0200
     2.3 @@ -1,4 +1,8 @@
     2.4  \chapter{Abstract}
     2.5  \label{cha:abstract}
     2.6  
     2.7 -english abstract here
     2.8 +Multi-core hardware has become the standard in personal computing. Yet, much as during the {\em software crisis} around 1970, writing software that is able to exploit the new, parallel hardware is still a difficult, error-prone process.
     2.9 +
    2.10 +Due to their side-effect-free programming style, functional programming languages are considered particularly suitable for this kind of task. One application of such languages is theorem provers, whose foundation in mathematics and logic makes functional programming the paradigm of choice. {\em Isabelle} is an interactive theorem proving framework whose back-end has been implemented in {\em Standard ML}. Development efforts during recent years have enabled the system to efficiently use the full processing power available on current multi-core architectures. The educational mathematics system \isac{} draws on {\em Isabelle}'s functionality and was thus until recently the only project of its kind to be based on theorem proving technology.
    2.11 +
    2.12 +This thesis documents the introduction of {\em Isabelle}'s parallelism concepts to \isac{} and outlines related technologies: parallelism fundamentals, various approaches to parallelism in functional programming and computer theorem proving. Furthermore, the architectures and concepts of {\em Isabelle} and \isac{} are discussed, both of which include means of communicating with the {\em Java Virtual Machine} for their front-ends. Thereby the strengths of functional and imperative programming are combined, and both paradigms are utilized to solve the respective problems for which they are particularly appropriate. The results of the parallelization are promising and will enable a large number of students to use a single, responsive \isac{} server for calculations simultaneously.
     3.1 --- a/doc-isac/mlehnfeld/master/thesis/appendix_a.tex	Thu Jun 26 10:50:27 2014 +0200
     3.2 +++ b/doc-isac/mlehnfeld/master/thesis/appendix_a.tex	Thu Jun 26 17:19:30 2014 +0200
     3.3 @@ -0,0 +1,101 @@
     3.4 +\chapter{Code Samples and Comments}
     3.5 +\label{app:code}
     3.6 +
     3.7 +\section{Fibonacci Implementations}
     3.8 +\label{sec:fib-imp}
     3.9 +The Fibonacci numbers are an integer sequence given by
    3.10 +\begin{equation}
    3.11 +  F_n = F_{n-1} + F_{n-2}
    3.12 +\end{equation}
    3.13 +and
    3.14 +\begin{equation}
    3.15 +  F_0 = 0, F_1 = 1\;.
    3.16 +\end{equation}
    3.17 +Because this is a very simple, recursive definition, it was chosen for the demonstration of certain concepts in this thesis. However, it is very important to note that the straightforward implementation
    3.18 +\imlcode{~~\=fu\=n fib 0 = 0\\
    3.19 +\>\>| fib 1 = 1\\
    3.20 +\>\>| fib x = fib (x - 1) + fib (x - 2)\textrm{,}}
    3.21 +\noindent
    3.22 +which we presented as an example of pattern matching in \textit{Standard ML} (page \pageref{alg:patmatch}, section \ref{sec:patmatch}), is very inefficient because it has a runtime complexity of $O(F_n)$. An efficient version of the function,
    3.23 +\imlcode{~~\=fu\=n \=fi\=b' 0 = (0, 1)\\
    3.24 +\>\>| fib' n = let\\
    3.25 +\>\>\>\>val (x1, x2) = fib' (n - 1)\\
    3.26 +\>\>\>in\\
    3.27 +\>\>\>\>(x2, x1 + x2)\\
    3.28 +\>\>\>end;\\
    3.29 +\>(*gets 2nd element of tuple*)\\
    3.30 +\>fun fib n = fib' (n - 1) |> snd \textrm{,}}
    3.31 +\noindent
    3.32 +shows linear runtime behavior. Annotations (section \ref{sec:annot}) were demonstrated using the Fibonacci function (page \pageref{alg:parannot}):
    3.33 +\imlcode{~~\=fu\=n fib 0 = 1\\
    3.34 +\>\>| fib 1 = 1\\
    3.35 +\>\>| fib x = fib (x - 1) + par (fib (x - 2))}
    3.36 +\noindent
    3.37 +In practice, the gain from parallelism in this example would not just be marginal; the overhead for managing parallel execution would most likely result in performance worse than that of the sequential version. Note that annotations of this kind can only work in a programming language following a lazy evaluation strategy, because in an eager language like \textit{Standard ML} the annotated expression would be evaluated before being passed to the hypothetical {\tt par} function.
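+In an eager language, the laziness such an annotation relies on can be simulated explicitly by wrapping the expression in a closure (a thunk), so that a {\tt par} combinator receives a still unevaluated computation. The following sketch assumes a hypothetical {\tt par} of type {\tt (unit -> int) -> int}; it is an illustration, not a function available in \textit{Standard ML}:
+\begin{MLCode}
+fun fib 0 = 1
+  | fib 1 = 1
+  | fib x = let
+      (* fn () => ... delays evaluation until par forces the thunk *)
+      val thunk = fn () => fib (x - 2)
+    in fib (x - 1) + par thunk end
+\end{MLCode}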
    3.38 +Futures have been discussed in detail throughout this thesis (sections \ref{sec:futurespro}, \ref{sec:actors_scala}, \ref{sec:csp}, \ref{sec:isafut}). The example on page \pageref{alg:futures} redefined {\tt fib} with a future:
    3.39 +\imlcode{\label{alg:fib-futures}
    3.40 +~~\=fu\=n \=fi\=b \=0 = 1\\
    3.41 +\>\>| fib 1 = 1\\
    3.42 +\>\>| fib x = let\\
    3.43 +\>\>\>\>\>fun fib' () = fib (x - 2)\\
    3.44 +\>\>\>\>\>val fibf = Future.fork fib'\\
    3.45 +\>\>\>\>in fib (x - 1) + (Future.join fibf) end}
    3.46 +This code does work in {\em Isabelle/ML} (section \ref{sec:isabelleml}). The performance concerns we saw with the previous example also apply here: in practice, parallelizing the evaluation of the Fibonacci function in this way makes no sense.
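+One common way to keep such overhead in check, shown here only as an illustrative sketch and not as code from the project, is a granularity threshold: a future is forked only while the remaining subproblem is large enough, and smaller subproblems fall back to plain recursion. The cutoff value below is arbitrary.
+\begin{MLCode}
+val cutoff = 20; (* arbitrarily chosen threshold *)
+fun fib 0 = 1
+  | fib 1 = 1
+  | fib x =
+      if x < cutoff
+      then fib (x - 1) + fib (x - 2) (* sequential below cutoff *)
+      else let
+        val fibf = Future.fork (fn () => fib (x - 2))
+      in fib (x - 1) + Future.join fibf end
+\end{MLCode}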
    3.47 +
    3.48 +
    3.49 +\section{{\tt merge\_lists} implementation}
    3.50 +\label{app:merge-lists}
    3.51 +In the \texttt{Theory\_Data} implementation on page \pageref{alg:thydata-functor} (section \ref{ssec:parall-thy}) we omitted the definitions of the functions {\tt merge\_lists} and {\tt merge\_lists'} for the sake of simplicity. They have been added here (program \ref{alg:merge-lists}). Please note that they require their input lists of type {\tt int list} to be sorted already. The difference between the two is that {\tt merge\_lists} accepts the two lists as a tuple, whereas {\tt merge\_lists'} takes them as two separate (curried) arguments.
    3.52 +\begin{program}
    3.53 +\caption{{\tt merge\_lists} implementation.}
    3.54 +\label{alg:merge-lists}
    3.55 +\begin{MLCode}
    3.56 +fun merge out [] xs = (rev out) @ xs
    3.57 +  | merge out xs [] = (rev out) @ xs
    3.58 +  | merge out (xs' as x::xs) (ys' as y::ys) =
    3.59 +      if x = y then
    3.60 +        merge (x::out) xs ys
    3.61 +      else if x < y then
    3.62 +        merge (x::out) xs ys'
    3.63 +      else
    3.64 +        merge (y::out) xs' ys;
    3.65 +val merge_lists' = merge [];
    3.66 +fun merge_lists (xs, ys) = merge_lists' xs ys;
    3.67 +\end{MLCode}
    3.68 +\end{program}
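+For illustration, a possible interactive session (a sketch, assuming sorted inputs; note that an element occurring in both lists appears only once in the result):
+\begin{MLCode}
+merge_lists ([1, 3, 5], [2, 3, 6]);  (* val it = [1, 2, 3, 5, 6] *)
+merge_lists' [1, 4] [2, 3, 5];       (* val it = [1, 2, 3, 4, 5] *)
+\end{MLCode}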
    3.69 +
    3.70 +
    3.71 +\section{Irrationality of \texorpdfstring{$\sqrt{2}$}{the square root of 2} in \textit{Isar}}
    3.72 +\label{app:sqrt2-isar}
    3.73 +
    3.74 +Program \ref{alg:isar-sqrt2} is a simple proof in \textit{Isar} which builds on the irrationality of the square root of 2 (lemma {\tt sqrt\_2\_not\_rat}) to show that an irrational number raised to an irrational power can be rational\footnote{\tt \textasciitilde\textasciitilde/src/HOL/ex/Sqrt.thy}.
    3.75 +
    3.76 +
    3.77 +\begin{program}
    3.78 +\caption{\textit{Isabelle/Isar} proof of the irrationality of $\sqrt{2}$.}
    3.79 +\label{alg:isar-sqrt2}
    3.80 +\vspace{\medskipamount}
    3.81 +\noindent
    3.82 +\colorbox{color5}{\texttt{\small
    3.83 +\begin{minipage}{.98\textwidth}
    3.84 +\setlength{\parindent}{1pc}
    3.85 +\noindent
    3.86 +{\tiny\ttfamily\color{color6} 1~}~{\bfseries\color{color1} lemma} {\color{color3} "$\exists$a b::real. a $\notin$ $\mathbb{Q}$ $\land$ b $\notin$ $\mathbb{Q}$ $\land$ a powr b $\in$ $\mathbb{Q}$"}\\
    3.87 +{\tiny\ttfamily\color{color6} 2~}~{\indent({\bfseries\color{color1} is} {\color{color3} "EX a b. ?P a b"})}\\
    3.88 +{\tiny\ttfamily\color{color6} 3~}~{\bfseries\color{color1} proof} cases\\
    3.89 +{\tiny\ttfamily\color{color6} 4~}~{\indent\bfseries\color{color1} assume} {\color{color3} "sqrt 2 powr sqrt 2 $\in$ $\mathbb{Q}$"}\\
    3.90 +{\tiny\ttfamily\color{color6} 5~}~{\indent\bfseries\color{color1} then have} {\color{color3} "?P (sqrt 2) (sqrt 2)"}\\
    3.91 +{\tiny\ttfamily\color{color6} 6~}~{\indent\indent\bfseries\color{color1} by} (metis sqrt\_2\_not\_rat)\\
    3.92 +{\tiny\ttfamily\color{color6} 7~}~{\indent\bfseries\color{color1} then show} {\color{color2} ?thesis} {\bfseries\color{color1} by} blast\\
    3.93 +{\tiny\ttfamily\color{color6} 8~}~{\bfseries\color{color1} next}\\
    3.94 +{\tiny\ttfamily\color{color6} 9~}~{\indent\bfseries\color{color1} assume} 1: {\color{color3} "sqrt 2 powr sqrt 2 $\notin$ $\mathbb{Q}$"}\\
    3.95 +{\tiny\ttfamily\color{color6} 10}~{\indent\bfseries\color{color1} have} {\color{color3} "(sqrt 2 powr sqrt 2) powr sqrt 2 = 2"}\\
    3.96 +{\tiny\ttfamily\color{color6} 11}~{\indent\indent\bfseries\color{color1} using} powr\_realpow [of \_ 2]\\
    3.97 +{\tiny\ttfamily\color{color6} 12}~{\indent\indent\bfseries\color{color1} by} (simp add: powr\_powr power2\_eq\_square [symmetric])\\
    3.98 +{\tiny\ttfamily\color{color6} 13}~{\indent\bfseries\color{color1} then have} {\color{color3} "?P (sqrt 2 powr sqrt 2) (sqrt 2)"}\\
    3.99 +{\tiny\ttfamily\color{color6} 14}~{\indent\indent{\bfseries\color{color1} by} (metis 1 Rats\_number\_of sqrt\_2\_not\_rat)}\\
   3.100 +{\tiny\ttfamily\color{color6} 15}~{\indent\bfseries\color{color1} then show} {\color{color2} ?thesis} {\bfseries\color{color1} by} blast\\
   3.101 +{\tiny\ttfamily\color{color6} 16}~{\bfseries\color{color1} qed}
   3.102 +\end{minipage}}}
   3.103 +\end{program}
   3.104 +
     4.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     4.2 +++ b/doc-isac/mlehnfeld/master/thesis/appendix_b.tex	Thu Jun 26 17:19:30 2014 +0200
     4.3 @@ -0,0 +1,20 @@
     4.4 +\chapter{Content of the DVD}
     4.5 +\label{app:cdrom}
     4.6 +
     4.7 +\paragraph{Format:} 
     4.8 +		DVD-R, Single Layer, ISO9660-Format
     4.9 +
    4.10 +\section{Master Thesis}
    4.11 +\begin{FileList}{/thesis/}
    4.12 +\fitem{Lehnfeld14.pdf} Master thesis
    4.13 +\fitem{images/*.png} Images
    4.14 +\fitem{literature/*.pdf} Copies of online/selected sources
    4.15 +\end{FileList}
    4.16 +
    4.17 +\section{Project Files}
    4.18 +\label{app:dvd-proj}
    4.19 +\begin{FileList}{/project/}
    4.20 +\fitem{demos/} Demo theories, performance test
    4.21 +\fitem{repo/} Repository snapshot (28/06/2014)
    4.22 +\fitem{README.md} Usage instructions
    4.23 +\end{FileList}
     5.1 --- a/doc-isac/mlehnfeld/master/thesis/conclusion.tex	Thu Jun 26 10:50:27 2014 +0200
     5.2 +++ b/doc-isac/mlehnfeld/master/thesis/conclusion.tex	Thu Jun 26 17:19:30 2014 +0200
     5.3 @@ -1,14 +1,58 @@
     5.4 -\chapter{Conclusion and Futurework}
     5.5 +\chapter{Conclusion}
     5.6  \label{cha:conclusion}
     5.7 +In this thesis we showed how an educational mathematics system called \isac{} was integrated with the theory data evaluation scheme managed by \isabelle, an interactive theorem proving system. This modification allowed for the straightforward introduction of a parallel execution mechanism called {\em futures}, which are data structures holding possibly unfinished computations and whose evaluation is efficiently managed by \isabelle{}'s run-time system. Before we went into the details of this process, a comprehensive theoretical basis was established. This included a discussion of the requirement for modern software to utilize the processing power available on shared memory multi-core architectures, which have become standard during the last decade. Also, various approaches towards the implementation and benefits of parallelism and concurrency in functional programming were explored, along with important aspects to consider in order to parallelize software efficiently. The architectures and technologies related to \isabelle{} and \isac{} were explained, and we saw how both projects combine \textit{JVM}-based front-ends with logical engines developed in \textit{Standard ML}, while following different approaches for the communication between these layers. The systems' designs show how one can benefit from the advantages and strengths of different programming paradigms and platforms within a single architecture. The results of the case study are promising and show that, depending on the problem at hand, the achievable speedup can come close to the number of available processing cores, potentially enhanced even further by simultaneous multithreading (hyper-threading).
     5.8 +%NOTES: functional programming, higher-order -> compiler responsible for efficiency; a lot of research has gone into it.
     5.9  
    5.10 -\section{Project Outcomes}
    5.11 -text
    5.12  
    5.13 +\section{Future Work}
    5.14 +While \isac{}'s internal user session management has now been parallelized, multiple user GUIs communicate with the same standard input stream on the \textit{Standard ML} side, and this communication is unsynchronized. The mathematics engine writes to one single standard output stream. The write operation in use is provided and synchronized by \isabelle. However, since all GUIs read from the same stream, they need to filter out all messages not meant for them and thus waste resources, especially with growing numbers of simultaneous user sessions. The next important step is therefore a more flexible, efficient and robust communication model, which would then allow the parallel session management to be stress-tested and fine-tuned to perform well in practice.
    5.15 +Other improvements to \isac{} include utilizing \isabelle{}'s function package and code generator to simplify and speed up \isac{}'s programming language, adapting \isabelle{}'s capabilities with respect to floating-point and complex numbers, and possibly adopting term nets for better performance of \isac{}'s rewrite engine. The latter plans for improving the mathematics engine can be pursued independently of work on the front-end due to a stable interface in between. While \isac{}'s own front-end will be extended and improved, future development of the \textit{Isabelle/jEdit} IDE may enable \isac{} to adopt it as its front-end in the long run.
    5.16  
    5.17 -\section{Conclusion}
    5.18 -text
    5.19 +The whole case study involved deeply invasive refactoring. The availability of automatic tools for this kind of task in \textit{Standard ML} would have been desirable. While there has been related work on \textit{Haskell} \cite{brown2012paraforming,thompson2005refactoring,dig2011refactoring,li2006comparative} and \textit{Erlang} \cite{brown2013cost,li2006comparative}, there are, to my knowledge, no such projects available specifically for \textit{Standard ML}. Such tools could also help remove unused code and support convenient restructuring of modules.
    5.20 +% TODO: isa_pide_educational.pdf
    5.21  
    5.22 -
    5.23 -\section{Futurework}
    5.24 -text
    5.25 -
    5.26 +%>>> Are the streams between the mathematics engine and the front-end
    5.27 +%>>> synchronized? If not, I would have to take care of that as well.
    5.28 +%>>> Or do other things happen inside appendFormula that might
    5.29 +%>>> need to be synchronized?
    5.30 +%>>
    5.31 +%>> This question now goes beyond the programming language (in [2] above)
    5.32 +%>> further down to the operating system: stdin/stdout streams [3] are
    5.33 +%>> incredibly robust there: Isac already works in such a way that the Java
    5.34 +%>> side writes to stdin arbitrarily without caring about stdout at all.
    5.35 +%>> Watching the console, which displays stdin and stdout together,
    5.36 +%>> looks very confusing.
    5.37 +%>>
    5.38 +%> In implementation.pdf, Isabelle has explicitly forbidden direct,
    5.39 +%> unsynchronized access to the standard streams. This is not about a
    5.40 +%> dependency between stdin and stdout. Multiple parallel write
    5.41 +%> operations on stdout can, for example, lead to corrupted
    5.42 +%> output.
    5.43 +%>> printf("hello\n");
    5.44 +%> and
    5.45 +%>> printf("world\n");
    5.46 +%> yield e.g.
    5.47 +%>> hewolrlo
    5.48 +%>> ld
    5.49 +%> I do not know whether this example is realistic. But I could not
    5.50 +%> express the problem in any other way.
    5.51 +%>
    5.52 +%With this you have expressed the problem perfectly, and the example is
    5.53 +%very realistic: the various threads with their respective calls to the
    5.54 +%math-engine return after an arbitrary time span (0.005 sec
    5.55 +%- 5.00 sec in available examples), and then only 1 stdout is available
    5.56 +%for all threads.
    5.57 +%With the single stdout, a substantial part of the current bottleneck in
    5.58 +%Isac's interaction chain remains in place after the completion of your
    5.59 +%thesis. For every thread to get its own stdin/out, symmetric changes
    5.60 +%on the Java side are necessary: a nice piece of work of considerable
    5.61 +%scope for FH experts after you!
    5.62 +%So that your work keeps Isac runnable, this task has to be
    5.63 +%solved:
    5.64 +%************ let a thread write to stdout only if
    5.65 +%                       no other thread is currently writing
    5.66 +%*********************
    5.67 +%In principle I can imagine a number of possibilities here, and we should
    5.68 +%carefully consider which of them is easiest for you and also as
    5.69 +%compatible as possible with the next development
    5.70 +%step.
     6.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     6.2 +++ b/doc-isac/mlehnfeld/master/thesis/data/thinkpad.dat	Thu Jun 26 17:19:30 2014 +0200
     6.3 @@ -0,0 +1,301 @@
     6.4 +n t
     6.5 +1 1668
     6.6 +1 1530
     6.7 +1 1608
     6.8 +1 1538
     6.9 +1 1554
    6.10 +1 1527
    6.11 +1 1541
    6.12 +1 1589
    6.13 +1 1561
    6.14 +1 1621
    6.15 +1 1571
    6.16 +1 1546
    6.17 +1 1571
    6.18 +1 1604
    6.19 +1 1524
    6.20 +1 1535
    6.21 +1 1619
    6.22 +1 1536
    6.23 +1 1530
    6.24 +1 1552
    6.25 +2 1676
    6.26 +2 1667
    6.27 +2 1672
    6.28 +2 1674
    6.29 +2 1671
    6.30 +2 1884
    6.31 +2 1697
    6.32 +2 1646
    6.33 +2 1597
    6.34 +2 1601
    6.35 +2 1710
    6.36 +2 1622
    6.37 +2 1653
    6.38 +2 1846
    6.39 +2 1636
    6.40 +2 1616
    6.41 +2 1636
    6.42 +2 1649
    6.43 +2 1713
    6.44 +2 1625
    6.45 +3 2194
    6.46 +3 2171
    6.47 +3 2138
    6.48 +3 2232
    6.49 +3 2161
    6.50 +3 2198
    6.51 +3 2196
    6.52 +3 2139
    6.53 +3 2265
    6.54 +3 2092
    6.55 +3 2067
    6.56 +3 2269
    6.57 +3 2258
    6.58 +3 2196
    6.59 +3 2246
    6.60 +3 2171
    6.61 +3 2225
    6.62 +3 2258
    6.63 +3 2065
    6.64 +3 2229
    6.65 +4 2314
    6.66 +4 2322
    6.67 +4 2333
    6.68 +4 2349
    6.69 +4 2365
    6.70 +4 2329
    6.71 +4 2358
    6.72 +4 2280
    6.73 +4 2294
    6.74 +4 2273
    6.75 +4 2308
    6.76 +4 2374
    6.77 +4 2367
    6.78 +4 2332
    6.79 +4 2327
    6.80 +4 2272
    6.81 +4 2319
    6.82 +4 2313
    6.83 +4 2291
    6.84 +4 2349
    6.85 +5 3833
    6.86 +5 3746
    6.87 +5 3844
    6.88 +5 3760
    6.89 +5 3846
    6.90 +5 3790
    6.91 +5 3797
    6.92 +5 3832
    6.93 +5 3794
    6.94 +5 3813
    6.95 +5 3864
    6.96 +5 3767
    6.97 +5 3855
    6.98 +5 3873
    6.99 +5 3808
   6.100 +5 3878
   6.101 +5 3815
   6.102 +5 3819
   6.103 +5 3809
   6.104 +5 3820
   6.105 +6 3988
   6.106 +6 4410
   6.107 +6 3961
   6.108 +6 3998
   6.109 +6 3935
   6.110 +6 3995
   6.111 +6 4020
   6.112 +6 3981
   6.113 +6 3961
   6.114 +6 4430
   6.115 +6 4052
   6.116 +6 4028
   6.117 +6 3938
   6.118 +6 3956
   6.119 +6 3980
   6.120 +6 4425
   6.121 +6 4004
   6.122 +6 3983
   6.123 +6 4086
   6.124 +6 3935
   6.125 +7 4510
   6.126 +7 4530
   6.127 +7 4417
   6.128 +7 4430
   6.129 +7 4417
   6.130 +7 4499
   6.131 +7 4440
   6.132 +7 4532
   6.133 +7 4399
   6.134 +7 4447
   6.135 +7 4478
   6.136 +7 4377
   6.137 +7 4421
   6.138 +7 4429
   6.139 +7 4457
   6.140 +7 4510
   6.141 +7 4523
   6.142 +7 4364
   6.143 +7 4428
   6.144 +7 4467
   6.145 +8 4700
   6.146 +8 4728
   6.147 +8 4692
   6.148 +8 4667
   6.149 +8 4815
   6.150 +8 4656
   6.151 +8 4806
   6.152 +8 4719
   6.153 +8 4815
   6.154 +8 4702
   6.155 +8 4735
   6.156 +8 4865
   6.157 +8 4852
   6.158 +8 4733
   6.159 +8 4790
   6.160 +8 4730
   6.161 +8 4757
   6.162 +8 4741
   6.163 +8 4834
   6.164 +8 4703
   6.165 +9 6288
   6.166 +9 6179
   6.167 +9 6201
   6.168 +9 6204
   6.169 +9 6219
   6.170 +9 6232
   6.171 +9 6314
   6.172 +9 6277
   6.173 +9 6198
   6.174 +9 6224
   6.175 +9 6275
   6.176 +9 6191
   6.177 +9 6286
   6.178 +9 6287
   6.179 +9 6260
   6.180 +9 6217
   6.181 +9 6213
   6.182 +9 6377
   6.183 +9 6319
   6.184 +9 6400
   6.185 +10 6198
   6.186 +10 6202
   6.187 +10 6489
   6.188 +10 6449
   6.189 +10 6213
   6.190 +10 6264
   6.191 +10 6224
   6.192 +10 6310
   6.193 +10 6245
   6.194 +10 6278
   6.195 +10 6290
   6.196 +10 6171
   6.197 +10 6178
   6.198 +10 6261
   6.199 +10 6364
   6.200 +10 6204
   6.201 +10 6245
   6.202 +10 6364
   6.203 +10 6218
   6.204 +10 6264
   6.205 +11 6744
   6.206 +11 6795
   6.207 +11 6723
   6.208 +11 6708
   6.209 +11 6768
   6.210 +11 6685
   6.211 +11 6682
   6.212 +11 6720
   6.213 +11 6699
   6.214 +11 6719
   6.215 +11 6666
   6.216 +11 6748
   6.217 +11 6744
   6.218 +11 6773
   6.219 +11 6698
   6.220 +11 6745
   6.221 +11 6697
   6.222 +11 6713
   6.223 +11 6745
   6.224 +11 6794
   6.225 +12 6990
   6.226 +12 7096
   6.227 +12 7040
   6.228 +12 7067
   6.229 +12 7060
   6.230 +12 7159
   6.231 +12 7113
   6.232 +12 7207
   6.233 +12 7010
   6.234 +12 7134
   6.235 +12 7332
   6.236 +12 7438
   6.237 +12 7458
   6.238 +12 7421
   6.239 +12 7529
   6.240 +12 7442
   6.241 +12 7388
   6.242 +12 7459
   6.243 +12 7559
   6.244 +12 7518
   6.245 +13 8387
   6.246 +13 8442
   6.247 +13 8397
   6.248 +13 8418
   6.249 +13 8360
   6.250 +13 8357
   6.251 +13 8376
   6.252 +13 8430
   6.253 +13 8357
   6.254 +13 8372
   6.255 +13 8417
   6.256 +13 8470
   6.257 +13 8382
   6.258 +13 8504
   6.259 +13 8384
   6.260 +13 8454
   6.261 +13 8488
   6.262 +13 8411
   6.263 +13 8494
   6.264 +13 8439
   6.265 +14 8697
   6.266 +14 8620
   6.267 +14 8850
   6.268 +14 8658
   6.269 +14 8633
   6.270 +14 8720
   6.271 +14 8588
   6.272 +14 9298
   6.273 +14 8656
   6.274 +14 8759
   6.275 +14 8658
   6.276 +14 8630
   6.277 +14 8626
   6.278 +14 8727
   6.279 +14 8737
   6.280 +14 8689
   6.281 +14 8655
   6.282 +14 8685
   6.283 +14 8690
   6.284 +14 8818
   6.285 +15 9153
   6.286 +15 9110
   6.287 +15 9011
   6.288 +15 9029
   6.289 +15 9003
   6.290 +15 9061
   6.291 +15 9091
   6.292 +15 9016
   6.293 +15 9035
   6.294 +15 9067
   6.295 +15 9004
   6.296 +15 8980
   6.297 +15 9130
   6.298 +15 8937
   6.299 +15 9179
   6.300 +15 9087
   6.301 +15 8979
   6.302 +15 9083
   6.303 +15 9077
   6.304 +15 9120
     7.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     7.2 +++ b/doc-isac/mlehnfeld/master/thesis/data/thinkpad_seq.dat	Thu Jun 26 17:19:30 2014 +0200
     7.3 @@ -0,0 +1,301 @@
     7.4 +n t
     7.5 +1 1580
     7.6 +1 1514
     7.7 +1 1518
     7.8 +1 1552
     7.9 +1 1518
    7.10 +1 1514
    7.11 +1 1563
    7.12 +1 1547
    7.13 +1 1507
    7.14 +1 1515
    7.15 +1 1513
    7.16 +1 1535
    7.17 +1 1507
    7.18 +1 1512
    7.19 +1 1522
    7.20 +1 1534
    7.21 +1 1505
    7.22 +1 1505
    7.23 +1 1511
    7.24 +1 1525
    7.25 +2 3007
    7.26 +2 3052
    7.27 +2 3006
    7.28 +2 3033
    7.29 +2 3040
    7.30 +2 3034
    7.31 +2 3009
    7.32 +2 3038
    7.33 +2 3048
    7.34 +2 3027
    7.35 +2 3228
    7.36 +2 3031
    7.37 +2 3046
    7.38 +2 3030
    7.39 +2 3053
    7.40 +2 3103
    7.41 +2 3028
    7.42 +2 3056
    7.43 +2 3005
    7.44 +2 3019
    7.45 +3 4587
    7.46 +3 4530
    7.47 +3 4528
    7.48 +3 4537
    7.49 +3 4531
    7.50 +3 4545
    7.51 +3 4556
    7.52 +3 4555
    7.53 +3 4603
    7.54 +3 4522
    7.55 +3 4546
    7.56 +3 4597
    7.57 +3 4536
    7.58 +3 4530
    7.59 +3 4542
    7.60 +3 4563
    7.61 +3 4542
    7.62 +3 4545
    7.63 +3 4533
    7.64 +3 4525
    7.65 +4 6164
    7.66 +4 6099
    7.67 +4 6151
    7.68 +4 6193
    7.69 +4 6040
    7.70 +4 6029
    7.71 +4 6032
    7.72 +4 6025
    7.73 +4 6056
    7.74 +4 6042
    7.75 +4 6024
    7.76 +4 6060
    7.77 +4 6054
    7.78 +4 6049
    7.79 +4 6189
    7.80 +4 6167
    7.81 +4 6096
    7.82 +4 6162
    7.83 +4 6098
    7.84 +4 6070
    7.85 +5 7551
    7.86 +5 7738
    7.87 +5 7629
    7.88 +5 7597
    7.89 +5 7616
    7.90 +5 7577
    7.91 +5 7601
    7.92 +5 7665
    7.93 +5 7651
    7.94 +5 7585
    7.95 +5 7560
    7.96 +5 7593
    7.97 +5 7589
    7.98 +5 7584
    7.99 +5 7602
   7.100 +5 7589
   7.101 +5 7576
   7.102 +5 7598
   7.103 +5 7597
   7.104 +5 7659
   7.105 +6 9204
   7.106 +6 9066
   7.107 +6 9081
   7.108 +6 9064
   7.109 +6 9099
   7.110 +6 9247
   7.111 +6 9113
   7.112 +6 9265
   7.113 +6 9262
   7.114 +6 9174
   7.115 +6 9111
   7.116 +6 9120
   7.117 +6 9238
   7.118 +6 9265
   7.119 +6 9232
   7.120 +6 9110
   7.121 +6 9140
   7.122 +6 9175
   7.123 +6 9129
   7.124 +6 9140
   7.125 +7 10633
   7.126 +7 10572
   7.127 +7 10585
   7.128 +7 10642
   7.129 +7 10687
   7.130 +7 10695
   7.131 +7 10736
   7.132 +7 10725
   7.133 +7 10585
   7.134 +7 10627
   7.135 +7 10612
   7.136 +7 10820
   7.137 +7 10677
   7.138 +7 10696
   7.139 +7 10695
   7.140 +7 10625
   7.141 +7 10885
   7.142 +7 10720
   7.143 +7 10699
   7.144 +7 10680
   7.145 +8 12196
   7.146 +8 12261
   7.147 +8 12225
   7.148 +8 12228
   7.149 +8 12282
   7.150 +8 12309
   7.151 +8 12281
   7.152 +8 12462
   7.153 +8 12274
   7.154 +8 12299
   7.155 +8 12209
   7.156 +8 12195
   7.157 +8 12325
   7.158 +8 12305
   7.159 +8 12307
   7.160 +8 12333
   7.161 +8 12164
   7.162 +8 12184
   7.163 +8 12189
   7.164 +8 12248
   7.165 +9 13913
   7.166 +9 14385
   7.167 +9 14175
   7.168 +9 14310
   7.169 +9 14071
   7.170 +9 13850
   7.171 +9 13957
   7.172 +9 13854
   7.173 +9 13797
   7.174 +9 13923
   7.175 +9 14119
   7.176 +9 14089
   7.177 +9 14020
   7.178 +9 14019
   7.179 +9 14313
   7.180 +9 14479
   7.181 +9 14372
   7.182 +9 14175
   7.183 +9 14088
   7.184 +9 14013
   7.185 +10 15630
   7.186 +10 15631
   7.187 +10 15765
   7.188 +10 15647
   7.189 +10 15889
   7.190 +10 15820
   7.191 +10 15902
   7.192 +10 15732
   7.193 +10 15823
   7.194 +10 15625
   7.195 +10 15655
   7.196 +10 15658
   7.197 +10 15516
   7.198 +10 15628
   7.199 +10 15673
   7.200 +10 15842
   7.201 +10 15774
   7.202 +10 15944
   7.203 +10 15974
   7.204 +10 15987
   7.205 +11 16682
   7.206 +11 16850
   7.207 +11 16637
   7.208 +11 16651
   7.209 +11 16720
   7.210 +11 16673
   7.211 +11 16760
   7.212 +11 16820
   7.213 +11 16534
   7.214 +11 16674
   7.215 +11 16784
   7.216 +11 16829
   7.217 +11 16748
   7.218 +11 16615
   7.219 +11 16634
   7.220 +11 16612
   7.221 +11 16804
   7.222 +11 17021
   7.223 +11 16812
   7.224 +11 16725
   7.225 +12 18222
   7.226 +12 18344
   7.227 +12 18250
   7.228 +12 18351
   7.229 +12 18341
   7.230 +12 18312
   7.231 +12 18318
   7.232 +12 18314
   7.233 +12 18313
   7.234 +12 18577
   7.235 +12 18323
   7.236 +12 18261
   7.237 +12 18485
   7.238 +12 18643
   7.239 +12 18291
   7.240 +12 18098
   7.241 +12 18146
   7.242 +12 18250
   7.243 +12 18326
   7.244 +12 18314
   7.245 +13 19731
   7.246 +13 19936
   7.247 +13 19692
   7.248 +13 20068
   7.249 +13 19892
   7.250 +13 19915
   7.251 +13 19834
   7.252 +13 19814
   7.253 +13 20122
   7.254 +13 19894
   7.255 +13 20114
   7.256 +13 19886
   7.257 +13 19802
   7.258 +13 20132
   7.259 +13 20002
   7.260 +13 19924
   7.261 +13 19917
   7.262 +13 20294
   7.263 +13 19996
   7.264 +13 20016
   7.265 +14 21228
   7.266 +14 21299
   7.267 +14 21118
   7.268 +14 21057
   7.269 +14 21136
   7.270 +14 21086
   7.271 +14 21094
   7.272 +14 21123
   7.273 +14 21198
   7.274 +14 21297
   7.275 +14 21188
   7.276 +14 21248
   7.277 +14 21381
   7.278 +14 21255
   7.279 +14 21561
   7.280 +14 21221
   7.281 +14 21217
   7.282 +14 21451
   7.283 +14 21238
   7.284 +14 21234
   7.285 +15 22912
   7.286 +15 22790
   7.287 +15 23040
   7.288 +15 22860
   7.289 +15 22735
   7.290 +15 22760
   7.291 +15 23151
   7.292 +15 22538
   7.293 +15 22484
   7.294 +15 22558
   7.295 +15 22762
   7.296 +15 22587
   7.297 +15 22716
   7.298 +15 22668
   7.299 +15 23003
   7.300 +15 22891
   7.301 +15 23368
   7.302 +15 23385
   7.303 +15 23094
   7.304 +15 23251
     8.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     8.2 +++ b/doc-isac/mlehnfeld/master/thesis/data/thinkpadavg.dat	Thu Jun 26 17:19:30 2014 +0200
     8.3 @@ -0,0 +1,16 @@
     8.4 +n t
     8.5 +1 1566
     8.6 +2 1675
     8.7 +3 2188
     8.8 +4 2323
     8.9 +5 3818
    8.10 +6 4053
    8.11 +7 4454
    8.12 +8 4752
    8.13 +9 6258
    8.14 +10 6272
    8.15 +11 6728
    8.16 +12 7271
    8.17 +13 8417
    8.18 +14 8720
    8.19 +15 9058
     9.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     9.2 +++ b/doc-isac/mlehnfeld/master/thesis/data/thinkpadavg_seq.dat	Thu Jun 26 17:19:30 2014 +0200
     9.3 @@ -0,0 +1,16 @@
     9.4 +n t
     9.5 +1 1525
     9.6 +2 3045
     9.7 +3 4548
     9.8 +4 6090
     9.9 +5 7608
    9.10 +6 9162
    9.11 +7 10680
    9.12 +8 12264
    9.13 +9 14096
    9.14 +10 15756
    9.15 +11 16729
    9.16 +12 18324
    9.17 +13 19949
    9.18 +14 21232
    9.19 +15 22878
    10.1 --- a/doc-isac/mlehnfeld/master/thesis/fundamentals.tex	Thu Jun 26 10:50:27 2014 +0200
    10.2 +++ b/doc-isac/mlehnfeld/master/thesis/fundamentals.tex	Thu Jun 26 17:19:30 2014 +0200
    10.3 @@ -1,54 +1,225 @@
    10.4  \chapter{Fundamentals}
    10.5  \label{cha:fundamentals}
    10.6 +This chapter gives an overview of fundamental concepts relevant for the practical project work carried out along with, and as a basis for, this thesis. We also consider ideas and technologies that are related to the topic of the thesis and could be utilized for similar practical applications. Section \ref{sec:funprog} explores the history and common features of functional programming. Computer theorem proving is introduced in section \ref{sec:ctp}, followed by section \ref{sec:parallelism} on the theoretical background of parallelism. Sections \ref{sec:concurresp} and \ref{sec:refacfunprogs} explain concurrency in relation to latency and responsiveness, and the refactoring of programs developed in functional programming languages, respectively.
    10.7 +%EXTEND: par_func_intro.pdf
    10.8  
    10.9  \section{Functional Programming}
   10.10  \label{sec:funprog}
   10.11 -about 2 pages\\
   10.12 -NOTES:\\
   10.13 -declarative programming, mathematical notation: close to formal specification -> prototyping\\
   10.14 -clean\_parallel\_graph\_rewriting.pdf: lambda calculus vs. graph rewriting systems.\\
   10.15 -impure vs. pure\\
   10.16 -monads vs. linear type systems\\
   10.17 -Hindley–Milner type inference
   10.18 +Functional programming is a declarative way of programming, i.e. source code does not describe routines but rather the desired result of a computation. Computation is the evaluation of expressions. Even programs are in fact functions; hence the name {\em functional programming}. Pure functional programming languages do not permit mutable data which means that there are no variables, only constant values. Functional programs are side effect free, i.e. given the same input parameters they must always produce the same results. Unlike subroutines in imperative languages, here the term {\em function} refers to a function in the mathematical sense. Therefore the implementation of programs tends to be closer to their specification compared to other paradigms. Although functional programming languages are rarely utilized for the development of commercial software \cite{hammond2000research}, they have had a significant impact on the theory and pragmatics of other paradigms and programming languages in general \cite{hudak1989conception}.
   10.19  
   10.20  \subsection{History}
   10.21 -text
   10.22 +\label{sec:funhist}
    10.23 +The history of functional programming dates back to a time before computers existed, when Alonzo Church published his work on the lambda calculus in 1932 \cite{church1932set}. This formal system was the first methodical approach towards describing a computational perspective on mathematical functions. One of its most notable properties is function self-application, which gives the lambda calculus its power whilst maintaining its consistency as a mathematical system. One of the first languages which contained elements of what is now known as functional programming is \textit{Lisp}. Its initial language design, established in 1958, was only slightly influenced by the lambda calculus, but later dialects have encouraged a more pure, side effect free programming style \cite{mccarthy1978history}. \textit{Lisp} was originally designed as a language for {\bf lis}t {\bf p}rocessing to be utilized in artificial intelligence research. A notable feature of \textit{Lisp} is higher-order functions, which can be found in most contemporary functional languages and which enable list iterations without the need for explicit loops.
   10.24 +
   10.25 +In 1978 John Backus, who had been heavily involved in the design of \textit{Fortran} and \textit{ALGOL}, held a Turing award lecture which emphasized certain benefits of functional style languages over imperative programming. The document which resulted from the lecture \cite{backus1978can} strongly influenced and fueled research in the area of functional programming. Also in the mid 1970s, Robin Milner and others from the \textit{University of Edinburgh} designed the \textit{ML} {\em metalanguage} as a command language for the theorem prover {\em LCF} \cite{milner1978theory} (see section \ref{sec:ctp}). They continued the development of \textit{ML} and turned it into a stand-alone programming language. Even though it has impure features such as mutable references and an I/O system which is not side effect free, \textit{ML} is known as a functional programming language because it encourages a functional programming style. \textit{ML}'s type system employs the {\em Hindley–Milner type inference algorithm} (see section \ref{sec:hindleymilner}) which reduces the number of required type annotations because it can infer the types of expressions in most cases. Later, a standard for the language was established \cite{milner1997definition}, which resulted in the programming language \textit{Standard ML}.
   10.26 +
    10.27 +As a dozen non-strict languages (i.e. languages employing lazy evaluation) emerged in the 1980s, it was decided at a conference in 1987 that the parallel efforts should be merged into one common language as a basis for an exchange of ideas. This decision resulted in the formation of a committee which created {\em Haskell} \cite{jones2003haskell}. Research around this purely functional, general-purpose programming language has produced many innovations in the area of functional programming during the last two decades.
   10.28  
   10.29  \subsection{Common Features}
   10.30 -text
   10.31 +There are certain distinguishing features which can be found in most contemporary functional programming languages. This section outlines the most prevalent ones (e.g., see \cite{hudak1989conception, haskell2013functional}).
   10.32 +
   10.33 +\subsubsection{Referential Transparency}
    10.34 +The heavy use of immutable data has already been mentioned in section \ref{sec:funprog}. A consequence of pure functions is {\em referential transparency}, the property that a computation always yields the same result whenever it is invoked with the same input parameters. Later in this section we will see how pure languages enable e.g. input, output and exception handling whilst preserving purity.
   10.35 +
   10.36 +\subsubsection{Recursion}
    10.37 +Recursion is not a technique specific to functional programming. However, since loops require mutable values (counter variables), recursion is used far more extensively than in imperative languages. Often it is the only way to express iteration. As recursion can easily cause an overflow of the call stack, most implementations of functional languages employ {\em tail call optimization}. This mechanism discards a function call's environment and thereby avoids creating a new stack frame for the recursive call, which saves memory. It only works for true tail recursion, i.e. if the result of the recursive call is the result of the function itself.
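         +
         +As an illustrative sketch, the naive definition \texttt{fun fact 0 = 1 | fact n = n * fact (n - 1)} is not tail recursive because the multiplication happens after the recursive call returns. Carrying the intermediate result in an accumulator argument makes the recursive call the function's result, so tail call optimization applies (the names are purely illustrative):
         +\imlcode{~~\=fu\=n fact' acc 0 = acc\\
         +\>\>| fact' acc n = fact' (acc * n) (n - 1);\\
         +\>val fact = fact' 1; \textrm{.}}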
   10.38 +
   10.39 +\subsubsection{Higher-Order Functions}
    10.40 +Higher-order functions (HOFs) are functions which expect other functions as their input parameters or have functions as their return values. Examples of frequently used HOFs are the {\em map} and {\em fold} operations, which apply other functions to lists of values as in the \textit{Standard ML} expression
   10.41 +\imlcode{~~val ls = map (fn x => x + 2) [1, 2, 5]}
   10.42 +\noindent
    10.43 +which would result in the list \texttt{[3, 4, 7]}. HOFs generally require languages to treat functions as first-class citizens, i.e. as simple values. Although HOFs are a central aspect of functional programming, imperative languages may also facilitate the use of functions as arguments of other functions; \textit{C}, for instance, allows the use of function pointers as first-class citizens. As we will see later, e.g. in section \ref{sec:futurespro}, HOFs can be beneficial for the expression of concurrency.
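         +
         +Since only {\em map} is illustrated above, here is a complementary sketch of a {\em fold}, which collapses a list into a single value by repeatedly applying a binary function to an accumulator. The \textit{Standard ML} basis function \texttt{foldl} starts from the left with the initial value \texttt{0}:
         +\imlcode{~~val sum = foldl (op +) 0 [1, 2, 5]}
         +\noindent
         +which evaluates to \texttt{8}.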
   10.44 +
   10.45 +\subsubsection{Purity and Effects}
   10.46 +\label{sec:pureeffects}
    10.47 +Many functional programming languages such as \textit{Standard ML} encourage a purely functional coding style. However, certain functionalities like I/O and mutable memory require developers to write impure code. Purely functional programming languages like \textit{Haskell} and \textit{Clean} do not permit any side effects and therefore use other mechanisms to accomplish the desired effects. The \textit{Haskell} developers decided to implement a language construct called {\em monad} \cite{jones2003haskell}. Monads can be described as representations of possibly impure computations. In order to preserve purity, functions which perform e.g. I/O operations have to accept monads as arguments or return them. Similarly, \textit{Clean} uses {\em uniqueness types} \cite{achten1995ins}, which ensure that a function's type makes transparent which mutable aspects of the environment it manipulates. This is accomplished by allowing these functions exactly one access to the denoted mutable data, which they receive as arguments and whose manipulated versions are contained in their return values.
   10.48 +
   10.49 +\subsubsection{Partial Function Evaluation}
    10.50 +Partial function evaluation means applying fewer arguments to a function than it requires, which results in another function that only expects the remaining arguments. E.g. the function \texttt{fun add x y = x + y} accepts two arguments. Its partial evaluation \texttt{val add2 = add 2} results in a function which only expects one argument and adds 2 to it.
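         +
         +The example from the text can be sketched as a complete \textit{Standard ML} fragment (the value names are purely illustrative):
         +\imlcode{~~\=fun add x y = x + y;\\
         +\>val add2 = add 2;\\
         +\>val five = add2 3; \textrm{.}}
         +\noindent
         +Here \texttt{add2 3} evaluates to \texttt{5}, because the partially applied argument \texttt{2} is remembered in the closure \texttt{add2}.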
   10.51 +
   10.52 +\subsubsection{Pattern Matching}
   10.53 +\label{sec:patmatch}
   10.54 +Pattern matching allows for data to be processed on the basis of its structure. E.g. in \textit{Standard ML}, functions are usually expressed with exhaustive pattern definitions. The classic definition of the Fibonacci numbers\footnote{see comments in appendix \ref{sec:fib-imp}} would be
   10.55 +\imlcode{\label{alg:patmatch}
   10.56 +~~\=fu\=n fib 0 = 0\\
   10.57 +\>\>| fib 1 = 1\\
   10.58 +\>\>| fib x = fib (x - 1) + fib (x - 2); \textrm{.}}
   10.59 +
   10.60 +
   10.61 +\subsubsection{Lazy Evaluation}
    10.62 +Due to referential transparency, the order in which expressions are evaluated does not affect the results. Therefore, some languages like \textit{Haskell} defer the computation of expressions until they are actually needed. This avoids unnecessary calculations and makes infinite data structures possible. On the other hand, lazy evaluation can potentially cause space leaks \cite{launchbury1993natural} and it makes formal reasoning about code hard \cite{lippmeier2009type}. The opposite is called {\em eager} or {\em strict} evaluation, which is the standard for most imperative languages and e.g. \textit{Standard ML}.
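         +
         +Even in a strict language, laziness can be emulated by hand. As a minimal sketch in \textit{Standard ML}, wrapping an expression in a parameterless function (a {\em thunk}) defers its computation until the thunk is applied:
         +\imlcode{~~\=val thunk = fn () => fib 30; \textrm{(* nothing computed yet *)}\\
         +\>val result = thunk (); \textrm{(* forces the computation *)}}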
   10.63  
   10.64  
   10.65  \section{Computer Theorem Proving}
   10.66  \label{sec:ctp}
   10.67 -about 1 page
    10.68 +Computer theorem proving is related to functional programming in several fundamental ways. My practical work for this thesis concerned TP ({\bf T}heorem {\bf P}roving) on a technical level below the logical level of TP, namely on the level of parallel execution on multi-core hardware. However, it is the logical level of TP which provides the fundamental relations to functional programming.
   10.69 +
   10.70 +\subsection{History}
   10.71 +\label{ssec:ctp-hist}
    10.72 +In order to understand TP, we start with its history from the very beginning.
   10.73 +
   10.74 +\paragraph{AI and \textit{Lisp}.}
   10.75 +Mechanical or ``formal'' reasoning was an original interest of Artificial Intelligence (AI). The first important programming languages in AI were \textit{Prolog} (logic programming) and \textit{Lisp}. The latter was briefly introduced as a functional language in section \ref{sec:funhist}.
   10.76 +
   10.77 +\paragraph{\textit{Automath}.}
    10.78 +A first notable success was the mechanisation of Edmund Landau's {\em Foundations of Analysis} \cite{van1979checking} in {\em Automath} in 1969. {\em Automath} is outdated, but some of its achievements are still used in TP, e.g. de Bruijn indices, the lambda calculus and the Curry-Howard correspondence. This early success in TP was not acknowledged by mainstream mathematics. Rather, TP was mixed up with predictions of early AI, e.g. Herbert A. Simon's claim that ``{\em [m]achines will be capable, within twenty years, of doing any work a man can do.}'' \cite{simon1965shape}; predictions that soon turned out to be unrealistic. Nevertheless, the mechanisation of various logics (sequent calculus, temporal logics, communicating sequential processes (see section \ref{sec:csp}), etc.) was continued and led to many different, unrelated software tools.
   10.79 +
   10.80 +\paragraph{\textit{LCF}.}
   10.81 +\label{sssec:lcf}
    10.82 +\textit{LCF} (Logic for Computable Functions) \cite{milner1978theory} denotes an interactive, automated theorem prover developed at the universities of Edinburgh and Stanford in the early 1970s. The \textit{LCF} project is relevant for this thesis because it introduced the \textit{ML} programming language, which is explored in section \ref{sec:isasml} and was used for the practical work within the thesis. The purpose of \textit{LCF} was to allow users to write theorem proving tactics and to encapsulate theorems as abstract da\-ta\-types, such that only the admitted inference rules can establish a theorem. This is still called the {\em LCF principle}. \textit{LCF} became the ancestor of the {\em HOL family} of provers introduced below.
   10.83 +
   10.84 +\paragraph{\textit{The Next Seven Hundred Theorem Provers}.}
   10.85 +\label{par:next-700}
    10.86 +As mentioned above, although an academic niche, TP developed many different and unrelated software tools in the 1970s and 1980s. So in the 1980s the situation in TP was similar to the situation of programming languages in the 1960s. In 1965 Peter Landin had published a talk and a paper titled {\em The next 700 programming languages} \cite{landin1966next}. This paper influenced the convergence of programming languages and led to focused efforts on a few leading products. When Larry Paulson started the development of a generic logical framework with the purpose of allowing various logics to be implemented within this framework, he reused the analogy and published a paper titled {\em The Next Seven Hundred Theorem Provers} \cite{paulson1988isabelle}. This paper addresses the general issues involved in the design of a TP and the many technicalities to overcome until a system becomes usable, and it warns against producing provers for one-time use: ``{\em Programs like {\em Isabelle} are not products: when they have served their purpose, they are discarded.}'' \cite{paulson1988isabelle}. In fact, the resulting logical framework, \isabelle{}, was able to attract efforts to implement new logics \cite{tum2013isadoc} within \isabelle{} and thus focused efforts, avoiding the development of further TPs from scratch.
   10.87 +
   10.88 +\paragraph{The HOL family of provers.}
   10.89 +\label{sssec:holfam}
   10.90 +This family of provers is a descendant of \textit{LCF} introduced above and models higher-order logic, a specific logic which does not have sets (as preferred by mathematics) at the axiomatic basis, but functions. Thus this family shows great affinity to functional programming. The ancestor of this family is {\em HOL} \cite{gordon1993introduction}. \isabelle{} \cite{nipkow2002isabelle} abstracts from higher-order logics to natural deduction and thus allows for implementations of various logics within one and the same system as mentioned above. {\em HOL Light} \cite{harrison2014hol} uses a much simpler logical core and has little legacy code, giving the system a simple and uncluttered feel for programmers. \isabelle{} is also a good example demonstrating that the traditional distinction between automated theorem proving (ATP) and interactive theorem proving (ITP) is obsolete today. \isabelle{} and other interactive proof assistants include various automated provers in order to assist interactive construction of proofs. \isabelle{}'s collection of automated theorem provers is called {\em Sledgehammer} (see section \ref{sec:isaatp}).
   10.91 +
   10.92 +\paragraph{Breakthrough in mathematics: the four color theorem.}
    10.93 +The four color theorem was stated more than a hundred years ago, but it turned out to be hard to prove. In 1976 the theorem was tackled by Appel and Haken with the help of computer software \cite{appel1977solution}. However, they could not prove their program correct and thus their work was not acknowledged as a mathematical proof. In 2005, Georges Gonthier \cite{gonthier2008formal} used TP for proving the theorem, and since that time TP has been accepted as an indispensable tool for large and complicated proofs. Gonthier used the TP {\em Coq} \cite{chlipala2011certified}, which works within the theory of the calculus of inductive constructions, a derivative of the calculus of constructions. Another famous theorem is the Kepler conjecture, which took even longer to be proved. In 1998, Thomas Hales presented a proof consisting of 250 pages of notes and 3\,GB of computer programs, data and results. In 2003, after four years of work, a prominent team of referees reported that they were ``99\% certain'' of the correctness of the proof \cite{hales2011historical}. Since a mathematical proof is either correct or not, Hales started the {\em Flyspeck} project \cite{google2012flyspeck} in order to employ TP for a complete and trustable proof. The mentioned webpage hints at the important role of \isabelle{} in the project.
   10.94 +
   10.95 +\paragraph{Continuous penetration of computer science.}
   10.96 +\label{par:invade-cs}
    10.97 +In computer science there are no spectacular breakthroughs like in mathematics. But there has been much work on formalizing foundations from the very beginning. In the early days of computer science, Austria had a world-renowned group gathered by Heinz Zemanek \cite{austriaforum2014zemanek}. At \textit{Vienna University of Technology}, Zemanek had built one of the first computers based solely on transistors, the {\em Mailüfterl}, in 1955. And IBM, the dominating company at that time, hired Zemanek to implement IBM's programming language number one, \textit{PL1}. \textit{PL1} was a monstrosity born of an IBM management decision to join \textit{Fortran} with \textit{COBOL} using insights from \textit{ALGOL}. The overwhelming difficulties in implementing \textit{PL1} were solved by what later became known as the {\em Vienna Development Method} (VDM) \cite{fitzgerald2008vienna}, an ancestor of what is now called {\em formal methods}.
   10.98 +
    10.99 +Formal methods, as the name indicates, formalize a domain with mathematical notions. In parallel, computer science clarified the logical foundations of programming (Hoare, Dijkstra and others), such that programs could be proved correct with respect to a formal specification, i.e. proofs rely on specifications created by formal methods. Ever since the {\em Pentium bugs} \cite{sharangpani1994statistical,intel1997intel} swallowed several million dollars, Intel has invested in formal methods. For instance, Intel supports the TP {\em HOL Light} mentioned above. As there are many safety critical applications in medicine, air traffic control, etc. and technical systems become more complex, the demand for formal methods is increasing. This includes TP.
  10.100 +
  10.101 +\subsection{Relevance}
  10.102 +
  10.103 +\paragraph{The executable fragment of HOL and code generation.}
   10.104 +As already mentioned, higher-order logic is closely related to functional programming. This relation includes an executable fragment, i.e. equational theorems within HOL, which can model programs and which can be evaluated within the logic. This way of embedding programs in the logical environment provides the best prerequisites for proving properties of programs and for software verification. \isabelle{} recently implemented a {\em function package}. Execution or, more appropriately, evaluation within a prover is very inefficient in comparison with state-of-the-art production software. Therefore engineering software cannot be built within a prover. However, it turned out that algorithms described within the executable fragment of HOL are a concise source for the automatic generation of efficient code in a surprisingly straightforward manner.
  10.105 +
  10.106 +\paragraph{Functional programming.}
   10.107 +Software verification tools are presently more advanced for functional programming languages than for others, and the theoretical background presented in this section indicates that verification tools will continue to advance faster within the functional paradigm than in other paradigms. Also, efficiency considerations are expected to change in favor of the functional paradigm. Programs written on an abstract level in a TP can be transformed into efficient code by automated code generation. This transformation can potentially handle parallelization much more conveniently than programs developed within other paradigms \cite{haftmann2010code}.
  10.108  
  10.109  
  10.110  \section{Parallelism}
  10.111  \label{sec:parallelism}
   10.112 +Concurrency in software has been around for a long time; without it, multitasking would be impossible. It means that processes and threads may be started, executed and completed in overlapping time periods, which does not necessarily imply any parallel, i.e. simultaneous, execution. Parallel software, in contrast, is structured in a way which allows for literally simultaneous execution, e.g. on multi-core processors. Now that such processors are heavily used in PCs, laptops and smartphones, efficient software must be able to exploit parallelism. Approaches to parallelism can roughly be grouped into three different levels: {\em instruction-level parallelism} (section \ref{sec:multicoreproc}), {\em task parallelism} (section \ref{sec:task_par}) and {\em data parallelism} (section \ref{sec:data_parallelism}). The following sections will establish a theoretical background of parallelism.
  10.113  
  10.114  \subsection{Gain, Cost and Limits}
  10.115 -about 1 page
   10.116 +Formulated in 1967, Amdahl's Law \cite{amdahl1967validity} describes the correlation between the inherently sequential fraction of a program and the maximum speedup $S_A$ achievable under parallel execution (see fig. \ref{fig:amdahls}), given by
  10.117 +\begin{equation}
  10.118 +S_A(n) = \frac{1}{1 - f + \frac{f}{n}} ,
  10.119 +\label{eq:amdahls}
  10.120 +\end{equation}
   10.121 +where $f$ is the fraction of the program which can be parallelized and $n$ is the number of processing units. The upper limit for the speedup with a theoretically infinite number of processors is therefore $$\hat{S}_A = \lim_{n \rightarrow \infty} \frac{1}{1 - f + \frac{f}{n}} = \frac{1}{1 - f} .$$
  10.122  
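         +As a brief numeric illustration, a program with a parallel portion of $f = 0.95$ run on $n = 8$ cores yields at most $$S_A(8) = \frac{1}{0.05 + \frac{0.95}{8}} \approx 5.93 ,$$ and even infinitely many processors could not push the speedup beyond $\hat{S}_A = \frac{1}{0.05} = 20$.
         +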
  10.123 -\subsection{Multicore Processors}
  10.124 -about 1 page\\
  10.125 -NOTES:\\
  10.126 -hyper threading
  10.127 +\begin{figure}
  10.128 +\centering
  10.129 +\begin{tikzpicture}[domain=1:10192,samples=2500]
  10.130 +  \begin{semilogxaxis}[
  10.131 +    xlabel=number of processors $n$,
  10.132 +    ylabel=speedup $S_A(n)$,
  10.133 +    log basis x=2,
  10.134 +    xmin=1,
  10.135 +    xmax=10192,
  10.136 +    xticklabel style={/pgf/number format/.cd,1000 sep={}},
  10.137 +    xticklabel=\pgfmathparse{2^\tick}\pgfmathprintnumber{\pgfmathresult},
  10.138 +    ymin=0,
  10.139 +    ymax=21,
  10.140 +    grid=major,
  10.141 +    legend entries={\hspace{-.7cm}{parallel portion $f$},$50\%$,$75\%$,$90\%$,$95\%$},
  10.142 +    legend style={draw=none,at={(0.95,0.85)}},
  10.143 +    title={Amdahl's Law},
  10.144 +    axis lines=left,
  10.145 +    scale=1.59
  10.146 +  ]
  10.147 +    \addlegendimage{empty legend};
  10.148 +    \addplot[color1,very thick] plot (\x,{1/(0.5 + (0.5 / \x))}) node[right]{};
  10.149 +    \addplot[color2,very thick] plot (\x,{1/(0.25 + (0.75 / \x))}) node[right]{}; 
  10.150 +    \addplot[color3,very thick] plot (\x,{1/(0.1 + (0.9 / \x))}) node[right]{}; 
  10.151 +    \addplot[color4,very thick] plot (\x,{1/(0.05 + (0.95 / \x))}) node[right]{};
  10.152 +  \end{semilogxaxis}
  10.153 +\end{tikzpicture}
  10.154 +\caption{Amdahl's Law and potential speedups.}
  10.155 +\label{fig:amdahls}
  10.156 +\end{figure}
  10.157 +
   10.158 +Amdahl's Law expects the input dataset size to be fixed. However, John L. Gustafson observed that the fraction which can be computed in parallel generally grows with the problem size, i.e. the bigger the input dataset, the higher the percentage of calculations which can be carried out simultaneously \cite{gustafson1988reevaluating}. Therefore Gustafson's Law approaches the problem from a different perspective. It assumes that in practice an algorithm's problem size depends on the available temporal and computational resources. The resulting {\em scaled speedup} $S_G$ relates the hypothetical sequential execution time to the parallel execution time and increases linearly with the number of processors (see fig. \ref{fig:gustafsons}). It is given by
  10.159 +\begin{equation}
  10.160 +S_G(n) = 1 - f \cdot (1 - n) .
  10.161 +\label{eq:gustafsons}
  10.162 +\end{equation}
  10.163 +
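         +As a numeric illustration, a parallel portion of $f = 0.95$ on $n = 8$ processors gives a scaled speedup of $$S_G(8) = 1 - 0.95 \cdot (1 - 8) = 7.65 ,$$ considerably more optimistic than Amdahl's fixed-size prediction.
         +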
  10.164 +\begin{figure}
  10.165 +\centering
  10.166 +\begin{tikzpicture}[domain=0:512,samples=2500]
  10.167 +  \begin{axis}[
  10.168 +    xlabel=number of processors $n$,
  10.169 +    ylabel=speedup $S_G(n)$,
  10.170 +    xmin=1,
  10.171 +    xmax=512,
  10.172 +    xticklabel style={/pgf/number format/.cd,1000 sep={\,}},
  10.173 +    xticklabel=\pgfmathparse{\tick}\pgfmathprintnumber{\pgfmathresult},
  10.174 +    ymin=0,
  10.175 +    ymax=514,
  10.176 +    grid=major,
  10.177 +    legend entries={\hspace{-.7cm}{parallel portion $f$},$50\%$,$75\%$,$90\%$,$95\%$},
  10.178 +    legend style={draw=none,at={(0.42,0.95)}},
  10.179 +    title={Gustafson's Law},
  10.180 +    axis lines=left,
  10.181 +    scale=1.59
  10.182 +  ]
  10.183 +    \addlegendimage{empty legend};
  10.184 +    \addplot[color1,very thick] plot (\x,{1 - 0.5 * (1 - \x)}) node[right]{};
  10.185 +    \addplot[color2,very thick] plot (\x,{1 - 0.75 * (1 - \x)}) node[right]{}; 
  10.186 +    \addplot[color3,very thick] plot (\x,{1 - 0.9 * (1 - \x)}) node[right]{}; 
  10.187 +    \addplot[color4,very thick] plot (\x,{1 - 0.95 * (1 - \x)}) node[right]{};
  10.188 +  \end{axis}
  10.189 +\end{tikzpicture}
  10.190 +\caption{Gustafson's Law and potential speedups.}
  10.191 +\label{fig:gustafsons}
  10.192 +\end{figure}
  10.193 +
   10.194 +The two theoretical speedups do not take into account dependencies or communication between parallel tasks. These factors can cause a performance overhead which may exceed the speedup if parallelization is taken too far. A certain class of parallel problems, called {\em embarrassingly parallel problems} \cite{moler1986matrix}, does not require any communication at all.
  10.195 +
  10.196 +\subsection{Multi-Core Processors}
  10.197 +\label{sec:multicoreproc}
   10.198 +While the computing industry continued to fulfill Moore's predictions \cite{moore1965cramming} by fitting more and more transistors on smaller chips and thereby producing persistently faster uniprocessors, it is now physically almost impossible to make transistors any smaller \cite{geer2005chip}. Heat dissipation resulting from the transistor density on the fastest chips requires extensive cooling and causes high power consumption. Therefore hardware manufacturers have turned to building chips with multiple processing cores which may not be as fast as recent uniprocessors, but produce less heat and are more energy efficient. Architectures differ in the number of cores and their memory models. Several or all cores may share one cache, or each may have its own cache. Separate caches avoid the challenges of coordinating cache accesses, but sometimes a centralized cache can be faster. Unlike multiple chips, several cores on the same chip can share memory management and memory elements, and signaling between them can be faster because they are placed on the same die.
  10.199 +
  10.200 +Multi-core processors, however, could not solve all performance issues, even if software were able to fully exploit their power. Other factors like memory bandwidth can turn out to be performance bottlenecks \cite{moore2008multicore}. Also, these processors are not the only way hardware may support parallelism:
  10.201 +\begin{description}
  10.202 +\item[Bit-level parallelism] takes advantage of increased processor word sizes in order to reduce the number of instructions that need to be executed to complete operations on variables which are longer than one word \cite{culler1999parallel}.
  10.203 +
   10.204 +\item[Instruction-level parallelism] tries to discover data independent instructions which often can be reordered and computed during the same processing cycle. Also, instruction pipelines with multiple stages allow different instructions to be in different stages and thereby overlap their processing \cite{hennessy2012computer}.
  10.205 +
  10.206 +\item[Simultaneous multithreading] enables multiple threads to issue instructions on a single processing core which may be processed during the same cycle \cite{tullsen1995simultaneous}.
  10.207 +\end{description}
  10.208  
  10.209  \subsection{Operating Systems}
  10.210 -about 1 page\\
  10.211 -NOTES:\\
  10.212 -I/O, streams!
   10.213 +Operating systems are heavily involved in the resource management of their underlying architecture, both for their own operations and for user applications. Besides support for simultaneous multithreading (see previous section), general-purpose OSes have to deal with diverse hardware approaches, e.g. multiprocessors vs. multi-core processors, shared vs. separate cache models. Scheduling algorithms and memory management are facing new challenges because they must balance loads between cores while avoiding cache misses and reducing the number of context switches. In this thesis we will assume symmetric multiprocessing (SMP), i.e. systems whose processing cores are identical and access the same main memory. Scheduling algorithms in OSes for more heterogeneous systems also need to take into account differing performance characteristics and possibly even differing instruction sets of the individual processors.
  10.214  
  10.215  \subsection{Functional Programming}
  10.216 -about 1 page
  10.217 +\label{sec:par_funprog}
   10.218 +Imperative programming lets the programmer make the steps that the system should carry out explicit. The idea of declarative, including functional, paradigms is to describe the desired result of a computation instead. On the spectrum between an algorithm's formal specification and the resulting machine code, functional code is generally said to be particularly close to the formal specification \cite{henderson1986functional}. This high-level programming paradigm therefore places higher demands for optimization on compilers and run-time systems, which includes the management and exploitation of multiple CPU cores.
  10.219  
  10.220 +As purely functional programming prohibits side effects, parallelizing such programs should be trivial. However, in practice, general-purpose functional programming languages are not only used for {\em embarrassingly parallel problems}. Harris and Singh \cite{harris2007feedback} identify five issues which may arise when trying to automatically parallelize functional programs:
  10.221 +\begin{enumerate}
  10.222 +  \item Inherent parallelism may be limited, i.e. call hierarchies may enforce a certain order of execution.
  10.223 +  \item Parallel executions need to be managed by the run-time system. This introduces an overhead which can even cause a slowdown if the parallelizable work is too fine-grained.
  10.224 +  \item Sometimes, purely functional code may contain side effects when it is compiled (e.g. memoization).
   10.225 +  \item Purely functional general-purpose languages generally do provide means of encapsulating side effects such as I/O or updates to mutable data, and such effects constrain the order of evaluation.
  10.226 +  \item In languages supporting lazy evaluation it is not known whether expressions actually need to be computed until their results are queried.
  10.227 +\end{enumerate}
   10.228 +For more details, see chapter \ref{cha:funproglangs_mcsystems}, which is entirely dedicated to parallelism in functional programming languages.
  10.229  
  10.230 -\section{Concurrency and Responsiveness}
  10.231 +\section{Concurrency, Latency and Responsiveness}
  10.232  \label{sec:concurresp}
  10.233 -about 1/2 page
   10.234 +Concurrency can improve system performance by hiding latencies. While one thread performs blocking operations such as disk I/O, another thread can continue to perform other calculations independently. In cases where latencies are introduced only because a computationally complex operation claims the processor for a long period of time, multithreading cannot improve performance. But it can improve responsiveness by updating the user interface, giving feedback about the status of pending operations and reacting to user input. Responsiveness does not necessarily require high performance, but it is a major factor of the user experience that an application is able to deliver, and concurrency can facilitate increased responsiveness. However, reasoning intuitively as well as formally about concurrent software and developing it is said to be difficult, mostly due to undetermined execution orders and thread synchronization issues \cite{sutter2005software}.
  10.235  
  10.236  
  10.237  \section{Refactoring Functional Programs}
  10.238  \label{sec:refacfunprogs}
  10.239 -about 2 pages
  10.240 +Refactoring is a process which changes the design and structure of existing software without changing its external behavior. It can result in simpler programs, remove duplicate code or prepare an extension of the program's functionality. Because refactoring is a potentially tedious and error-prone process \cite{thompson2005refactoring}, a variety of tools exists for object-oriented programming languages which automatically check the preconditions of refactoring steps and apply them. The number of such tools for functional programming languages, however, is limited. One of the few existing projects targets \textit{Haskell} code and is called {\em HaRe} (\textit{\textbf{Ha}skell} \textbf{Re}factorer). It follows five main strategies:
  10.241 +\begin{enumerate}
  10.242 +  \item {\bf Generalisation.} Extract some of a function's behavior into an argument, possibly a HOF, and thereby make the function more general and reusable.
  10.243 +  \item {\bf Commonality.} Similar or identical parts of a program are identified and extracted into a single function, which is then invoked at the respective locations with appropriate parameters.
  10.244 +  \item {\bf Data Abstraction.} Replacing algebraic da\-ta\-types by abstract da\-ta\-types often allows programmers to modify concrete implementations without touching client code.
  10.245 +  \item {\bf Overloading.} The use of {\em type classes} and {\em instances} allows for overloading of names. This may improve readability and facilitate code reuse.
  10.246 +  \item {\bf Monadification.} Packing code into monads also makes it possible to separate parts of a system from the client code, i.e. developers can modify the monads without changing client code.
  10.247 +\end{enumerate}
  10.248 +{\em HaRe} furthermore implements structural refactorings such as deletion of unused definitions and arguments, renaming of several kinds of program items and modifications of the scopes of definitions. Later versions introduced module-aware refactorings including import/export function organization and moving definitions between modules. For more details and refactoring techniques implemented in {\em HaRe} please refer to \cite{thompson2005refactoring}.
  10.249  
  10.250 +{\em Type classes} and {\em monadification} are specific to \textit{Haskell}. In \textit{Standard ML}, a comparable generality can often be achieved simply by omitting type annotations and relying on the Hindley–Milner type inference algorithm. Note that \textit{Standard ML} is nevertheless a strongly typed language \cite{milner1997definition}.
    11.1 --- a/doc-isac/mlehnfeld/master/thesis/funproglangs_mcsystems.tex	Thu Jun 26 10:50:27 2014 +0200
    11.2 +++ b/doc-isac/mlehnfeld/master/thesis/funproglangs_mcsystems.tex	Thu Jun 26 17:19:30 2014 +0200
    11.3 @@ -1,51 +1,198 @@
    11.4 -\chapter{Functional Programming Languages and Multicore Systems}
    11.5 +\chapter{Parallelism in Functional Programming}
    11.6  \label{cha:funproglangs_mcsystems}
    11.7 +This chapter presents several concepts for parallelism and concurrency which have been suggested for and implemented in functional programming languages. The central idea here is the philosophy of the declarative paradigm: the developer should not have to worry about {\em how} the parallel execution needs to be organized but at most make informed decisions on {\em what} has to be parallelized. Ideally, process management details such as process creation, resource assignment and synchronization should be taken care of by the compiler and run-time system. In practice, various properties of languages, such as impurity and monadic I/O, may allow for language constructs from imperative programming or even require forms of explicit synchronization.
    11.8 +
    11.9 +Many of the mechanisms presented in the following sections have been implemented in and modified for certain programming languages. It seemed appropriate to discuss these realizations and modifications in the respective sections. Table \ref{tab:lang_par_overview} gives a brief overview. Note that it only contains those mechanisms and sections that mention concrete implementations.
   11.10 +\begin{table}[ht]
   11.11 +\centering
   11.12 +\caption{Overview of parallelism mechanisms and referenced programming languages.}
   11.13 +\label{tab:lang_par_overview}
   11.14 +\def\rr{\rightskip=0pt plus1em \spaceskip=.3333em \xspaceskip=.5em\relax}
   11.15 +\setlength{\tabcolsep}{1ex}
   11.16 +\def\arraystretch{1.20}
   11.17 +\setlength{\tabcolsep}{1ex}
   11.18 +\small
   11.19 +\begin{tabular}{|p{0.33\textwidth}|p{0.36\textwidth}|c|}
   11.20 +  \hline
   11.21 +  \multicolumn{1}{|c}{\em Concept} &
   11.22 +  \multicolumn{1}{|c}{\em Language} &
   11.23 +  \multicolumn{1}{|c|}{\em Section} \\
   11.24 +  \hline\hline
   11.25 +  {\rr Annotations} & {\rr \textit{Clean}} & \ref{sec:annot} \\
   11.26 +  \hline
   11.27 +  {\rr Futures and Promises} & {\rr \textit{Multilisp}, \textit{Isabelle/ML}, \textit{Scala}} & \ref{sec:futurespro}, \ref{sec:actors_scala}, \ref{sec:csp} \\
   11.28 +  \hline
   11.29 +  {\rr Parallel Combinators} & {\rr \textit{Haskell}} & \ref{sec:parcomb} \\
   11.30 +  \hline
   11.31 +  {\rr Algorithmic Skeletons} & {\rr \textit{Java}} & \ref{sec:algoskel} \\
   11.32 +  \hline
   11.33 +  {\rr Data Parallelism} & {\rr \textit{Haskell}} & \ref{sec:data_parallelism} \\
   11.34 +  \hline
   11.35 +  {\rr Soft\-ware Transactional Mem\-o\-ry} & {\rr \textit{C}, \textit{Haskell}, \textit{Java}, \textit{JavaScript}, \textit{Python}, \textit{Scala}} & \ref{sec:stm} \\
   11.36 +  \hline
   11.37 +  {\rr Actors} & {\rr \textit{Erlang}, \textit{Scala}} & \ref{sec:actormodel} \\
   11.38 +  \hline
   11.39 +  {\rr Com\-mu\-ni\-cat\-ing Sequential Pro\-ces\-ses} & {\rr \textit{occam}, \textit{Limbo}, \textit{Go}} & \ref{sec:csp} \\
   11.40 +  \hline
   11.41 +\end{tabular}
   11.42 +\end{table}
   11.43 +For consistency reasons, all sample code is provided in \textit{Standard ML}, even where the demonstrated concepts are not actually available in the language.
   11.44 +%NOTES: IMPLICIT, CONTROLLED, EXPLICIT; TRANSFORMATIONAL VS. REACTIONAL;\\
   11.45 +%MISSING: EVALUATION STRATEGIES (http://hackage.haskell.org/package/parallel-3.1.0.1/docs/Control-Parallel-Strategies.html), ACTIVE DATA STRUCTURES, PIPELINE PARALLELISM
   11.46  
   11.47  \section{Implicit Parallelism}
   11.48  \label{sec:implicit_parallelism}
   11.49 -about half page
   11.50 +Due to the lack of side effects, it should be possible to compute independent subexpressions in parallel. This means that in theory, purely functional programs are parallelizable automatically by the compiler. However, the whole process is not trivial, as section \ref{sec:par_funprog} already pointed out. A compiler that supports implicit parallelization must be able to perform granularity and cost analyses in order to achieve advantageous parallelizations and avoid those whose overhead would exceed their potential gain because they are too fine-grained. Additionally, compilers for lazy languages have to incorporate a strictness analysis, i.e. identify expressions which must necessarily be evaluated, such that they do not introduce an overhead by computing expressions whose results are not needed. An example of a function with a strict argument is the {\tt map} operation: if its list argument is undefined, the result of the whole function call will be undefined.
   11.51 +
   11.52 +
   11.53 +\section{Task Parallelism}
   11.54 +\label{sec:task_par}
   11.55 +Instead of automating parallelization, functional general-purpose languages such as \textit{Haskell} and \textit{Clean} provide special syntax to indicate that expressions are to be evaluated in parallel and thereby make theoretically implicit parallelism explicit.
   11.56 +
   11.57 +\subsection{Parallel {\tt let}-Expressions}
   11.58 +One proposed method is parallel {\tt let}-expressions \cite{hammer2007proposal}. {\tt let}-expressions are a common way in functional languages to allow the programmer to declare a list of values required for the evaluation of one final expression. The modified version {\tt letpar} has been suggested for the indication of a list of independent values which could potentially be computed in parallel. While this keyword is not available in \textit{Standard ML}, a hypothetical use could look like the expression
   11.59 +\imlcode{~~\=le\=tpar\\
   11.60 +\>\>val x = add2 3\\
   11.61 +\>\>val y = fib 8\\
   11.62 +\>\>val z = add 7 5\\
   11.63 +\>in x + y + z end; \textrm{.}}
   11.64 +
   11.65 +\subsection{Annotations}
   11.66 +\label{sec:annot}
   11.67 +Another straightforward solution is the use of annotations. This is probably the simplest way of expressing parallelism because it does not affect the semantics of a program but only its run-time behavior. By ignoring the annotations, the program can still be compiled sequentially. This method has been implemented in the programming language \textit{Clean} \cite{nocker1991concurrent}, which also allows indications of the target processing unit for the evaluation. Since eager languages evaluate a function's arguments before invoking the function itself, this mechanism, in its simple form, only works for lazy languages. A potential implementation of the parallel {\tt fib} function\footnote{see comments in appendix \ref{sec:fib-imp}} in a lazy version of \textit{Standard ML} could look somewhat like the definition
   11.68 +\imlcode{\label{alg:parannot}
   11.69 +~~\=fu\=n fib 0 = 1\\
   11.70 +\>\>| fib 1 = 1\\
   11.71 +\>\>| fib x = fib (x - 1) + par (fib (x - 2)); \textrm{.}}
   11.72 +\noindent
   11.73 +A similar construct for eager languages is slightly more involved and will be discussed in the following section.
   11.74 +
   11.75 +\subsection{Futures and Promises}
   11.76 +\label{sec:futurespro}
   11.77 +The concepts of {\em promises} and {\em futures} were first suggested in the mid 1970s \cite{baker1977incremental}. The idea is similar to annotations (see previous section). One of the first languages that adopted futures was called {\em Multilisp} \cite{halstead1985multilisp}. As the {\em Isabelle} theorem prover provides its own implementation of futures \cite{matthews2010efficient} which have been introduced to \isac{} during the parallelization process (section \ref{ssec:session-manag}), they are of special interest for this thesis. Futures are objects representing the result of an expression which may not be known initially because its computation is managed by the run-time system and may occur in parallel. {\em Implicit} futures are usually an integral part of a language and treated like ordinary references. If results are not available the first time they are requested, program execution stops until the future value has been obtained. In contrast, {\em explicit} futures require programmers to manually enforce their computations before they can access their results. {\em Isabelle}'s solution follows the latter approach and will be discussed in more detail in section \ref{sec:isafut}. The definition
   11.78 +\imlcode{\label{alg:futures}
   11.79 +~~\=fu\=n \=fi\=b \=0 = 1\\
   11.80 +\>\>| fib 1 = 1\\
   11.81 +\>\>| fib x = let\\
   11.82 +\>\>\>\>\>fun fib' () = fib (x - 2)\\
   11.83 +\>\>\>\>\>val fibf = Future.fork fib'\\
   11.84 +\>\>\>\>in fib (x - 1) + (Future.join fibf) end;}
   11.85 +\noindent
   11.86 +demonstrates the use of futures in \textit{Isabelle/ML}\footnote{see comments in appendix \ref{sec:fib-imp}}. The function \texttt{Future.fork} accepts one argument which must be a function of type \texttt{unit -> 'a}, i.e. one which accepts an empty argument ``\texttt{()}'' and returns an arbitrary type. \texttt{Future.join} requires a future value as its argument and returns the result contained in it.
   11.87 +
   11.88 +As with general evaluation strategies, futures can be computed eagerly or lazily, i.e. their computation can be started on creation of the data structure or on demand. They are read-only placeholders. In contrast, promises are the single-assignment containers which hold a future's value. Futures and promises are closely linked to Communicating Sequential Processes (section \ref{sec:csp}) and also to the Actor model (section \ref{sec:actors_scala}) and are therefore available e.g. in \textit{Scala} \cite{akka2014futures}. Although their adoption originated in functional programming, their use is not limited to this paradigm.
   11.89 +%http://en.wikipedia.org/wiki/Task_parallelism, Scala, Isabelle
   11.90 +
   11.91 +\subsection{Parallel Combinators}
   11.92 +\label{sec:parcomb}
   11.93 +\textit{Haskell}'s approach to task parallelism is the use of parallel combinators. Its run-time system manages a queue of tasks, so called {\em sparks}, whose evaluation occurs as soon as an idle CPU is detected. The language provides the keyword {\tt par}, which accepts two arguments, ``sparks'' the evaluation of the first and returns the computed result of the second. For more details see e.g. \cite{marlow2009runtime}.
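Although these combinators are specific to \textit{Haskell}, their behavior can be pictured in \textit{Standard ML} by a hypothetical function \texttt{par} which, in a lazy variant of the language (as assumed in section \ref{sec:annot}), sparks the evaluation of its first argument and returns its second:
\imlcode{~~\=fu\=n \=fi\=b \=0 = 1\\
\>\>| fib 1 = 1\\
\>\>| fib x = let\\
\>\>\>\>\>val y = fib (x - 2)\\
\>\>\>\>in par y (fib (x - 1) + y) end; \textrm{.}}
\noindent
Here the spark for \texttt{y} may be picked up by an idle processing unit while the current thread computes \texttt{fib (x - 1)}.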
   11.94 +%CAN BE EXTENDED
   11.95 +
   11.96 +\subsection{Algorithmic Skeletons}
   11.97 +\label{sec:algoskel}
   11.98 +The idea of algorithmic skeletons is simple and not limited to parallelism. Most parallel algorithms show common patterns. These patterns can be abstracted into higher-order functions. A well-known example is {\em divide-and-conquer}:
   11.99 +\imlcode{~~\=fu\=n \=da\=c \=is\=\_trivial solve divide conquer x =\\
  11.100 +\>\>\>if is\_trivial x then\\
  11.101 +\>\>\>\>solve x\\
  11.102 +\>\>\>else\\
  11.103 +\>\>\>\>divide x\\
  11.104 +\>\>\>\>\>|> map (dac is\_trivial solve divide conquer)\\
  11.105 +\>\>\>\>\>|> conquer; \textrm{,}}
  11.106 +\noindent
  11.107 +where \texttt{x |> f} is syntactic sugar for \texttt{f x} and \texttt{is\_trivial} is a function with one argument and a \texttt{bool} return value. In order to parallelize the algorithm, we only need to use a parallel version of {\tt map}. Other common patterns include {\em branch-and-bound} and {\em dynamic programming}. While this method is also available for imperative languages such as \textit{Java}, the fact that functional languages treat higher-order functions as first-class citizens makes them particularly suitable for the use of skeletons. Algorithmic skeletons hide the orchestration between sequential portions of code from the programmer. Sometimes the compiler can provide low-level support for the skeletons. Furthermore, multiple primitive skeletons can be composed to produce more powerful patterns. Unlike other high-level parallelism mechanisms, they lend themselves to cost modelling \cite{hammond2000research}.
  11.108  
  11.109  
  11.110  \section{Data Parallelism}
  11.111  \label{sec:data_parallelism}
  11.112 -about 1 page
  11.113 +% COULD BE EXTENDED (nested parallelism details, active data structures, Scala parallel collections)
  11.114 +Many operations require the application of the same function to multiple items of a data set. This can generally occur in parallel. With this kind of algorithm, the parallel portion naturally depends on the problem size. The same issues apply that we encountered before: the granularity often is too fine and the overhead exceeds the benefits of parallel processing. Most applications of data parallelism are based on array structures, and early implementations specialized in highly parallel, high-performance and scientific computing \cite{hammond2000research}. Recent developments in GPU hardware and the arrival of general-purpose GPU programming thanks to projects such as {\em CUDA} and {\em Accelerator} \cite{tarditi2006accelerator} have fueled data parallelism research during the last few years \cite{nickolls2010gpu}. Distributed parallel algorithms like Google's {\em MapReduce} should be mentioned here. However, we will not discuss distributed parallelism in this thesis.
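As an illustration, a flat data parallel operation could be expressed in \textit{Standard ML} by means of a hypothetical structure \texttt{ParList}, whose \texttt{map} applies a sequential function to the list elements on all available processing units:
\imlcode{~~\=val xs = List.tabulate (10000, fn i => i);\\
\>val ys = ParList.map (fn x => x * x) xs; \textrm{.}}
\noindent
Both \texttt{ParList} and its internal granularity control are assumptions for the purpose of this sketch.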
  11.115  
  11.116 -
  11.117 -\section{Algorithmic Skeletons}
  11.118 -\label{sec:algoskel}
  11.119 -about 3 pages
  11.120 -
  11.121 -
  11.122 -\section{Futures and Promises}
  11.123 -\label{sec:futurespro}
  11.124 -about 2 pages
  11.125 +In 1990, Blelloch and Sabot first made a distinction between {\em flat} and {\em nested} parallelism \cite{blelloch90compiling}. Until then, data parallelism implementations had only considered sequential functions for parallel application on data sets. Nested data parallelism is significantly more complex in that it allows the application of functions which themselves are parallel. More than 15 years later, this functionality was suggested and implemented as an extension to the \textit{Haskell Glasgow Compiler} called {\em Data Parallel Haskell} \cite{peytonjones2008harnessing}.
  11.126  
  11.127  
  11.128  \section{Concurrent Functional Programming}
  11.129  \label{sec:concurrentfunprog}
  11.130 +As we saw in section \ref{sec:parallelism}, concurrency describes two or more tasks whose lifespans may overlap while parallelism is capable of exploiting multiple processing units by executing multiple tasks simultaneously. Section \ref{sec:concurresp} outlined how concurrency can be beneficial by hiding latencies and improving responsiveness. Now we want to explore concurrency models and control mechanisms in functional programming.
  11.131 +% could further be extended by process calculi
  11.132  
  11.133  \subsection{Software Transactional Memory}
  11.134 -about 2 pages
  11.135 +\label{sec:stm}
  11.136 +While hardware support for transactions had been introduced before and transactions were a fundamental aspect of fault tolerance in the context of database systems, software-only transactional memory was first suggested by Shavit and Touitou \cite{shavit1997software}. Unlike {\em lock-based synchronization}, STM generally is a non-blocking strategy. Transactions are sequences of read and write operations on a shared memory. They are tagged as atomic blocks and processed optimistically, regardless of other threads potentially performing other alterations to the memory simultaneously. Each access to the memory is logged and once a transaction is finished, a final validation step verifies that no other thread modified any of the elements which were read during the process, i.e. that the transaction's view of the memory was consistent. If there was no conflicting write access, the transaction is committed to the memory. Otherwise the whole process is repeated until no more conflicts arise. In order to be correct, transactions must be linearizable, i.e. it must be possible to execute them in a certain sequence without temporal overlaps such that the resulting state of the shared memory is the same as if the transactions had been carried out concurrently \cite{herlihy1990linearizability}. Because of their non-blocking nature, STMs do not cause deadlocks or priority inversion. They are inherently fault tolerant to a certain extent, because transactions can be undone in the event of a timeout or an exception. Also, they are free from the tradeoffs between lock granularity and concurrency.
  11.137 +
  11.138 +Recent work on {\em commitment ordering} has helped reduce the number of transaction conflicts and thereby improve the performance of STM implementations \cite{ramadan2009committing,zhang2010software}. Also, there are various STM solutions that do in fact use locking mechanisms during different phases of a transaction. They can potentially reduce the number of rollbacks and consequently also make process coordination faster \cite{ennals2006software}. The significant difference between these approaches and conventional, mutex-based thread synchronization mechanisms is the fact that with STM, any locking is invisible to the programmer and taken care of by the run-time system. The downside of STM is the performance overhead caused by transaction management, which typically impairs the performance of programs with a low number of threads, such that they are slower than sequential implementations. STMs, however, scale particularly well with respect to higher thread and processor numbers \cite{saha2006mcrt}.
  11.139 +
  11.140 +Several implementations for various programming languages including \textit{C}, \textit{Java}, \textit{Scala}, \textit{Python} and even \textit{JavaScript} are available. \textit{Concurrent Haskell} offers multiple language constructs built on the basis of STM to allow for composable, nested, alternative and blocking transactions as well as STM exceptions \cite{harris2005composable}.
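To illustrate, a transfer between two bank accounts, the classic transaction example, could look as follows in \textit{Standard ML}, assuming a hypothetical STM library with transactional variables (\texttt{TVar}) and an \texttt{atomic} combinator which validates, commits and, if necessary, retries the given transaction:
\imlcode{~~\=va\=l \=a = TVar.new 100;\\
\>val b = TVar.new 0;\\
\>fun transfer amount = atomic (fn () =>\\
\>\>\>(TVar.write a (TVar.read a - amount);\\
\>\>\>~TVar.write b (TVar.read b + amount)));\\
\>transfer 10; \textrm{.}}
\noindent
Because the whole function passed to \texttt{atomic} is re-executed on a conflict, no intermediate state in which one account has been debited but the other not yet credited ever becomes visible to other threads.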
  11.141 +%An example use of STM in StandardML could look like snippet \ref{alg:stm}.
  11.142 +%\begin{MLCode}[alg:stm]{Theoretical Software Transactional Memory use in \textit{Standard ML}}
  11.143 +%\end{MLCode}
  11.144 +
  11.145 +%CML similar approach par_conc_ml.pdf
  11.146 +%http://channel9.msdn.com/Shows/Going+Deep/Programming-in-the-Age-of-Concurrency-Software-Transactional-Memory
  11.147  
  11.148  \subsection{The Actor Model}
  11.149 -about 3 pages
  11.150 +\label{sec:actormodel}
  11.151 +In 1973, Hewitt, Bishop and Steiger suggested a modular actor formalism \cite{hewitt1973universal} which laid out the basis of what is now known as the {\em Actor model}. Several publications of the following decade formed a comprehensive Actor model theory. The two central concepts of the Actor model are {\em actors} and {\em messages} \cite{agha1985actors}. Actors are entities that can send and receive messages. Their behavior is defined by the {\bf actions} they perform on receipt of a message (e.g. creating another actor) and a finite number of other actors they know about, called {\bf acquaintances}. The acquaintance relation is not necessarily bidirectional, i.e. an acquaintance $B$ of actor $A$ does not necessarily know actor $A$. Each actor has an immutable, unique name and a local state. Communication between actors is asynchronous. Incoming messages are processed one at a time and, as a response to each message, an atomic sequence of actions is performed. Since actors and messages are autonomous, encapsulated entities, they are easy to reason about and can be composed into more complex systems \cite{agha1997foundation}. The order of actions affects an actor's external behavior. But because messaging occurs asynchronously, the processing order of messages is undefined. This leads to nondeterminism in computations based on the actor system. There are certain abstractions available which allow for constraints on message ordering and can therefore eliminate part of the nondeterminism \cite{karmani2009actor}. Actor systems can further be characterized by four important properties:
  11.152 +\begin{enumerate}
  11.153 +\item {\bf Encapsulation.} Unlike shared memory thread models, actors are highly encapsulated. Therefore, they provide no locking or synchronization mechanisms for shared resources. Their internal states are not directly accessible by other actors. The only way for them to exchange data or influence each other is by means of messages. This restriction, however, is violated by some implementations in languages such as \textit{Scala} \cite{thurauakka}. In programming languages which permit mutable data, messages should, in theory, not transfer references to locations in a shared address space. Otherwise the receiving party could modify memory locations that are contained in another actor's state representation. However, the \textit{JVM} actor toolkit {\em Akka} allows actors to do just that, and it is the programmer's responsibility to maintain encapsulation.
  11.154 +\item {\bf Fairness.} Another important property is fairness with respect to actor sched\-ul\-ing: a message will be delivered to its target actor eventually, i.e. unless an actor dies or displays faulty behavior, it will be scheduled in a fair manner.
  11.155 +\item {\bf Location Transparency.} This property demands that an actor's unique name is independent of its location, i.e. its assigned CPU and memory can potentially belong to a remote machine and even be migrated at run time.
  11.156 +\item {\bf Mobility.} The capability for such migration can enable load balancing and improved fault tolerance. However, implementations do not necessarily satisfy these properties (e.g. \textit{Scala}). {\em Weak mobility} allows code or initial states to be moved, while {\em strong mobility} additionally supports execution state migration \cite{fuggetta1998understanding}. It has been shown that the presence of mobility can significantly improve an architecture's scalability \cite{panwar1994methodology}.
  11.157 +\end{enumerate}
  11.158 +To demonstrate the use of a theoretical actor library for \textit{Standard ML}, let us define a trivial function \texttt{greet} and execute it in the background:
  11.159 +\imlcode{~~\=fun greet name = print ("Hello " \^~name \^~"!\textbackslash{}n");\\
  11.160 +\>val greeter = Actor.create greet;\\
  11.161 +\>Actor.send greeter "John Doe";}
  11.162  
  11.163 -\subsection{Communication Sequential Processes}
  11.164 -about 1 page
  11.165 +\subsubsection{\textit{Scala}}
  11.166 +\label{sec:actors_scala}
  11.167 +\textit{Scala} is a programming language for the \textit{Java Virtual Machine} that combines concepts of object-oriented and functional programming \cite{odersky2004overview}. Its distribution comes with a toolkit for actors and message passing functionality called {\em Akka}. \textit{Scala}'s standard actors library, available up to version 2.9.2 \cite{scala2014migration}, has been deprecated and replaced by \textit{Akka}. Its implementation is based on \textit{Erlang}-like, fault tolerant actors and its execution model follows an {\em event-based} approach, thus introducing {\em inversion of control}. Compared to \textit{Scala}'s original actors, \textit{Akka} actors are more performant and fault tolerant and offer automatic load balancing \cite{tasharofi2013scala}. They draw on the language's functional programming features such as partial functions and, like \textit{Erlang}, pattern matching, which are convenient for message passing. Actors potentially provide safer concurrency than traditional shared memory mechanisms because, by design, no race conditions can occur. Also, actors are more lightweight than threads.
  11.168 +% why no race conditions!?!?!?
  11.169  
  11.170 +\subsubsection{Futures}
  11.171 +As mentioned in section \ref{sec:futurespro}, futures are related to the Actor model. \textit{Akka} provides a {\tt Future} class whose constructor accepts a block ({\tt \{...\}}) $b$ and returns immediately with a future value which is a placeholder for the computation's result. $b$ is then executed asynchronously by a thread pool managed by the run-time environment \cite{akka2014futures}. Instead of creating futures directly, \textit{Akka}'s actors have a method {\tt ?} that one can use to send a message. The method then returns a future value whose calculation can be synchronized and retrieved using {\tt Await.result}. There are further functions for composition, combination or ordering of futures as well as exception and error handling. The library also supports direct access to {\tt Promises}, the containers that complete futures.
  11.172  
  11.173 -\section{Implications on Software Design}
  11.174 -\label{sec:implications_swdesign}
  11.175 -about 3 pages
  11.176 +\subsection{Communicating Sequential Processes}
  11.177 +\label{sec:csp}
  11.178 +CSP (Communicating Sequential Processes) is another model of concurrency and belongs to the family of {\em process calculi}. Process calculi are of special interest for functional programming in that they support a declarative notion of concurrency. CSP was first proposed in an acclaimed paper by Hoare in 1978 \cite{hoare1978communicating} and originally designed for reasoning about concurrent processes. Like the Actor model, it utilizes message passing for communication. Implementations include {\em occam} \cite{INMOS84occam}, {\em Limbo} \cite{dorward1997programming} and {\em Go} by Google \cite{pike2012go}. The main difference from the Actor model is CSP's use of a concept named {\em channels}: actors communicate directly with each other, while CSP processes send and receive messages on directed channels. Another distinction is the fact that in the original CSP model, messages are delivered synchronously and instantaneously. This means that when a process sends a message to a channel, it waits until another process has received it, and the receiving party blocks until there is a message available on the channel. Processes are anonymous and need to establish channels between each other before they can communicate, which involves a rendezvous, i.e. the exchange of the receiving and sending ends of channels. While the basic proposal suggested point-to-point channels, this can also be extended to allow multiple processes to know and use a channel's sending and/or receiving end. The developers of {\em Go} have chosen an approach that differs slightly from the original CSP model: channels are asynchronous and buffered. They are declared to transmit a specific da\-ta\-type. Interestingly, channels are first-class values. As a consequence, they can also be transmitted via channels.
Using \textit{Poly/ML}'s thread library, we can utilize a supposed CSP library to achieve the same behavior we already saw with actors and invoke \texttt{greet} asynchronously:
  11.179 +\imlcode{\label{alg:csp}
  11.180 +~~\=fun greet name = print ("Hello " \^~name \^~"!\textbackslash{}n");\\
  11.181 +\>val (c\_in, c\_out) = Channel.create () : string channel;\\
  11.182 +\>fun greet\_csp () = Channel.read c\_out |> greet;\\
  11.183 +\>Thread.fork (greet\_csp, []);\\
  11.184 +\>Channel.write c\_in "John Doe";}
  11.185  
  11.186 +\subsubsection{Futures}
  11.187 +CSP, too, can simulate futures (section \ref{sec:futurespro}): A future is a channel that can transmit exactly one element and the according promise is another process that sends the result of a computation to the channel.
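Continuing the supposed CSP library from above, such a future could be sketched in \textit{Standard ML} as a single-element channel whose writing process plays the role of the promise:
\imlcode{~~\=va\=l (prom, fut) = Channel.create () : int channel;\\
\>Thread.fork (fn () => Channel.write prom (fib 42), []);\\
\>(* ... other work ... *)\\
\>val result = Channel.read fut; \textrm{.}}
\noindent
The names \texttt{prom} and \texttt{fut} merely label the two ends of the hypothetical channel.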
  11.188  
  11.189 -\section{Refactoring Functional Programs for Multicore Systems}
  11.190 +% "Squeak: a Language for Communicating with Mice"
  11.191 +% "Newsqueak: a Language for Communicating with Mice"
  11.192 +% Go concurrency: http://www.youtube.com/watch?v=3DtUzH3zoFo
  11.193 +%   slides: http://go-lang.cat-v.org/talks/slides/emerging-languages-camp-2010.pdf
  11.194 +
  11.195 +% CCS? pi-calculus?
  11.196 +% Clean, Broadcasting Sequential Processes
  11.197 +% about 1 page
  11.198 +
  11.199 +
  11.200 +\section{Refactoring for Parallelism}
  11.201  \label{sec:refac_funprogs4mcsystems}
  11.202 +Only a few publications have been dedicated to refactorings for parallelism \cite{kennedy1991interactive,wloka2009refactoring,dig2011refactoring}, and even fewer in the context of functional programming. The {\em ParaForming} tool for \textit{Haskell} \cite{brown2012paraforming} appears to be the first attempt, followed by very recent work on \textit{Erlang} \cite{brown2013cost} and language independent approaches \cite{hammond2013paraphrase,brown2012language}. Due to the trend towards multi-, many- and even megacore machines, modern programming languages should support parallelism by design, because the introduction of parallelism should not occur as an afterthought during later stages of the software development cycle. \textit{ParaForming} enforces a separation of concerns between business logic and implementation details usually taken care of by system programmers. It offers several additional refactorings for \textit{HaRe} (section \ref{sec:refacfunprogs}) to support efficient, coarse-grained task and data parallelism.
  11.203 +\begin{description}
  11.204 +\item[Introduce Task Parallelism.] This refactoring takes an expression $x$ from a function, embeds $x$ into an {\tt Eval} monad, sparks its evaluation using parallel combinators (section \ref{sec:parcomb}) and substitutes the occurrences of $x$ for the sparked computation $x'$.
  11.205 +\item[Introduce Data Parallelism] takes an expression $l$ which must be a list, replaces it by a {\tt parList}, i.e. a parallel implementation of a list supported by the standard library, and ensures that the parallel versions of operations on $l$ are used.
  11.206 +\item[Introduce Thresholding] allows for the definition of limits to the granularity of the aforementioned refactorings. Thresholding requires a designated expression $t$ and a threshold. Calls to the respective function are only evaluated in parallel if $t$ is not lower than the threshold.
  11.207 +\item[Clustering] refers to data parallelism and accepts a minimum chunk size for a parallel list. Whenever the size of a list falls below this value, the sequential versions of operations are used.
  11.208 +\end{description}
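To make the first of these refactorings concrete, the following \textit{Haskell} sketch shows the shape of code before and after {\em Introduce Task Parallelism}, assuming GHC's {\tt Control.Parallel.Strategies}; the function {\tt fib} is only a hypothetical stand-in for an expensive computation, and the rewrite is schematic rather than \textit{ParaForming}'s literal output.

```haskell
import Control.Parallel.Strategies (runEval, rpar, rseq)

-- Hypothetical stand-in for an expensive computation.
fib :: Integer -> Integer
fib n | n < 2     = n
      | otherwise = fib (n - 1) + fib (n - 2)

-- Before: both summands are evaluated sequentially.
sumSeq :: Integer -> Integer -> Integer
sumSeq a b = fib a + fib b

-- After "Introduce Task Parallelism": the first summand is sparked
-- with rpar (the sparked computation x'), the second is evaluated in
-- the current thread, and the occurrences of x are replaced by x'.
sumPar :: Integer -> Integer -> Integer
sumPar a b = runEval $ do
  x' <- rpar (fib a)
  y  <- rseq (fib b)
  return (x' + y)
```

Run with a threaded runtime ({\tt +RTS -N}), the spark lets both summands be evaluated on different cores.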
  11.209 +\noindent
  11.210 +For other refactorings, examples and a discussion of performance gains and overheads, please refer to \cite{brown2012paraforming}. \textit{ParaForming} is very specific to \textit{Haskell} and its refactorings are relatively straightforward. However, they avoid common pitfalls such as forgetting to substitute a use of an expression. Decisions on granularity are not made by the system, but their implementation is automated as much as possible.
  11.211  
  11.212 -\subsection{Previous Work}
  11.213 -about 4 pages
  11.214 -\subsubsection{HaRe and ParaForming in Haskell}
  11.215 +\subsubsection{Wrangler}
  11.216 +In 2013, Brown et al. documented their work on an extension of {\em Wrangler} \cite{brown2013cost,li2006comparative}, a refactoring tool for Erlang. The provided refactorings introduce skeletons (section \ref{sec:algoskel}) and use cost models to facilitate informed decisions on which refactorings and skeletons to apply.
  11.217  
  11.218 -\subsection{Proposed Futurework}
  11.219 -about 1 page
  11.220 +% paraforming
  11.221 +% based on rewrite rules applied to abstract syntax tree (AST)
  11.222 +% refactoring is function taking a list possible rewrite rules
  11.223 +% rewrite rules have conditions for nodes which they can be applied to
  11.224  
  11.225 +% futurework: standard ml implementation
  11.226 +% "A parallel SML compiler based on algorithmic skeletons"
    12.1 --- a/doc-isac/mlehnfeld/master/thesis/hgb.sty	Thu Jun 26 10:50:27 2014 +0200
    12.2 +++ b/doc-isac/mlehnfeld/master/thesis/hgb.sty	Thu Jun 26 17:19:30 2014 +0200
    12.3 @@ -298,7 +298,7 @@
    12.4  }{}
    12.5  
    12.6  
    12.7 -\RequirePackage{color}
    12.8 +\RequirePackage{xcolor}
    12.9  \definecolor{lightgray}{gray}{0.9}
   12.10  \definecolor{midgray}{gray}{0.5}
   12.11  \color{black}
    13.1 Binary file doc-isac/mlehnfeld/master/thesis/images/QED_Editor.png has changed
    14.1 Binary file doc-isac/mlehnfeld/master/thesis/images/jedit_feedback.png has changed
    15.1 Binary file doc-isac/mlehnfeld/master/thesis/images/next_btn.png has changed
    16.1 Binary file doc-isac/mlehnfeld/master/thesis/images/sledgehammer.png has changed
    17.1 Binary file doc-isac/mlehnfeld/master/thesis/images/theories_processing.png has changed
    18.1 --- a/doc-isac/mlehnfeld/master/thesis/introduction.tex	Thu Jun 26 10:50:27 2014 +0200
    18.2 +++ b/doc-isac/mlehnfeld/master/thesis/introduction.tex	Thu Jun 26 17:19:30 2014 +0200
    18.3 @@ -4,27 +4,22 @@
    18.4  
    18.5  \section{Motivation}
    18.6  \label{sec:motivation}
    18.7 -Moore's Law \cite{moore1965cramming} states that the density of transistors which can be placed on a single chip doubles about every two years. While the size of transistors is still shrinking, clock speeds hit an upper bound around the year 2004 (fig. \ref{fig:chip-speeds}) due to thermal issues \cite{sutter2005free}. Therefore most CPUs now come with multiple processing cores. This poses a challenge to the software running on these CPUs because it must support parallel or concurrent execution in order to efficiently exploit a computer’s full processing power.\\
    18.8 -
    18.9 +Moore's Law \cite{moore1965cramming} states that the density of transistors which can be placed on a single chip doubles about every two years. Clock speeds hit an upper bound around the year 2004 (see fig. \ref{fig:chip-speeds}) due to thermal issues \cite{sutter2005free} and transistor sizes cannot be reduced any more \cite{geer2005chip}. Therefore most CPUs now come with multiple processing cores. This poses a challenge to the software running on these CPUs because it must support parallel or concurrent execution in order to efficiently exploit a computer’s full processing power.
   18.10  \begin{figure}
   18.11  \centering
   18.12  \includegraphics[width=.95\textwidth]{chip-speeds}
   18.13 -\caption{Intel CPU trends. {\it source}: \cite{sutter2005free}}
   18.14 +\caption{Intel CPU trends (source: \cite{sutter2005free}).}
   18.15  \label{fig:chip-speeds}
   18.16  \end{figure}
   18.17 +Concurrency allows for multitasking and asynchronous computations can improve responsiveness even on uniprocessors (see section \ref{sec:concurresp}). But writing efficient parallel or concurrent software is hard \cite{sutter2005software}. Now that multi-core architectures are commonplace and highly parallel GPUs are beginning to be exploited for general-purpose computing \cite{nickolls2010gpu}, concurrent processes and threads can effectively run simultaneously and therefore complete significantly faster.
   18.18  
   18.19 -Concurrency allows for multitasking and asynchronous computations can improve responsiveness even on uniprocessors. But writing efficiently parallel or concurrent software is hard \cite{sutter2005software}. Now that multicore architectures are commonplace and higly parallel GPUs are beginning to get exploited for general-purpose computing \cite{luebke2006gpgpu}, concurrent processes and threads can effectively be running simultaneously and therefore significantly faster.\\
   18.20 -Functional programming is a declarative paradigm based on the lambda calculs. It has been claimed that it is specifically suited not only for prototyping but also for efficient parallelism due to some of its defining features like immutable data, statelessness and the lack of side effects (e.g. \cite{burge1975recursive}).\\
   18.21 -\\
   18.22 -The interactive theorem prover \isabelle{} \cite{paulson1994isabelle} is a comprehensive, international software project. Its internal logic has been written entirely in {\it Standard ML} which later was embedded in a {\it Scala} environment to simplify interaction with the underlying operating system and software built on top of \isabelle{}.\\
   18.23 -As the use of multicore systems became more common, \isabelle{}'s computationally complex logic was required to exploit the newly available processing power. {\it Makarius Wenzel}, one of the leading developers of \isabelle{}, parallelized essential parts of the system within a few man months (scattered over about two years). This is a surprisingly little amount of effort considering the project's size and complexity. A significant amount of the work done was due to the use of impure code facilitated by {\it Standard ML} and {\it Scala}. {\it Isabelle/Scala} provides tools for interaction between front end and proof engine and contains object oriented components to deal with inherently stateful aspects of the development environment {\it Isabelle/jEdit}.\\
   18.24 -\\
   18.25 -The educational mathematics system \isac{} was developed at {\it Graz University of Technology} on the basis of \isabelle{} and combines a {\it Java} frontend with a logical {\it Standard ML} core \cite{neuper2001reactive}.\\
   18.26 -Both projects are case studies for the use of functional programming in practical, interactive software projects which also demonstrate how the disadvantages of this paradigm's intrinsic properties like side-effect free programming can be complemented by integrating functional and imperative programming.\\
   18.27 -Analogous to \isabelle{}'s parallelization, \isac{} was reengineered for multicore systems. The involved background knowledge, documentation and outcomes of this project form the basis for this document. These topics will be expanded to include common parallelism and concurrency mechanisms suggested and implemented for other functional programming languages as well as previous research on refactoring techniques for multicore.
   18.28 +Functional programming is a declarative paradigm based on the lambda calculus. It has been claimed that it is specifically suited not only for prototyping but also for efficient parallelism due to some of its defining features such as immutable data, statelessness and the lack of side effects (e.g. \cite{burge1975recursive}).
   18.29 +
   18.30 +The interactive theorem prover \isabelle{} \cite{nipkow2002isabelle} is a comprehensive, international software project. Its internal logic has been written entirely in {\em Standard ML}, which was later embedded in a {\em Scala} environment to simplify interaction with the underlying operating system and with software built on top of \isabelle{}. As the use of multi-core systems became more common, \isabelle{}'s computationally complex logic was required to exploit the newly available processing power. Makarius Wenzel, one of the leading developers of \isabelle{}, parallelized essential parts of the system within a few man-months (scattered over about two years). This is surprisingly little effort considering the project's size and complexity. A significant part of the work was required due to the use of impure code, which \textit{Standard ML} and \textit{Scala} permit, i.e. work concerning purification in terms of the functional paradigm. {\em Isabelle/Scala} provides tools for interaction between front-end and proof engine and contains object-oriented components to deal with inherently stateful aspects of the development environment {\em Isabelle/jEdit}.
   18.31 +
   18.32 +The educational mathematics system \isac{} is being prototyped at {\em Graz University of Technology}. The prototype combines a \textit{Java} front-end with a mathematics engine written in \textit{Standard ML}, which reuses logical concepts and mechanisms of \isabelle{} \cite{neuper2012automated}. Both projects are case studies for the use of functional programming in practical, interactive software projects, and they demonstrate how the disadvantages of this paradigm's intrinsic properties, such as side-effect-free programming, can be compensated for by integrating functional and imperative programming. Analogous to \isabelle{}'s parallelization, \isac{} was reengineered for multi-core systems. The involved background knowledge, documentation and outcomes of this project form the basis for this document. These topics will be expanded to include common parallelism and concurrency mechanisms suggested and implemented for other functional programming languages as well as previous research on refactoring techniques for multi-core systems.
   18.33  
   18.34  
   18.35  \section{Thesis Structure}
   18.36  \label{sec:thesis_structure}
   18.37 -TODO
   18.38 -
   18.39 +The remainder of this thesis is structured as follows. Chapter \ref{cha:fundamentals} establishes the theoretical background of the different areas relevant for the practical work on the case study within \isac{}, including functional programming, computer theorem proving, parallelism, concurrency, responsiveness and the refactoring of programs within the functional paradigm. Subsequently, various approaches towards parallelism that can be and have been used in functional programming are investigated in chapter \ref{cha:funproglangs_mcsystems}. The chapter closes with a brief look at the refactoring of functional programs for parallelism. Chapter \ref{cha:isabelle_isac} is then dedicated to the two main software projects involved in the practical part of this thesis, \isabelle{} and \isac{}, as well as to a discussion of what has been done in the course of the practical work. A conclusion and an outlook on possible future work are given in chapter \ref{cha:conclusion}. Appendix \ref{app:code} provides code samples in addition to those given in the main text, along with comments and further explanations. Finally, appendix \ref{app:cdrom} lists the contents of the DVD that was handed in along with the thesis.
    19.1 --- a/doc-isac/mlehnfeld/master/thesis/isabelle_isac.tex	Thu Jun 26 10:50:27 2014 +0200
    19.2 +++ b/doc-isac/mlehnfeld/master/thesis/isabelle_isac.tex	Thu Jun 26 17:19:30 2014 +0200
    19.3 @@ -1,38 +1,589 @@
    19.4 -\chapter{Isabelle and {\isac}}
    19.5 +\chapter{\isabelle{} and \isac{}}
    19.6  \label{cha:isabelle_isac}
    19.7  
    19.8 +The case study described in this chapter was performed as an
    19.9 +invasive development on \isac{}, a prototype of an educational tool for
   19.10 +applied mathematics which is built upon the theorem prover \isabelle{}
   19.11 +in order to ensure the logical correctness of calculations. The development was invasive in that it addressed basic mechanisms of \isabelle{}
   19.12 +which are reused by \isac{}. The motivation for this development was to improve
   19.13 +\isac{}'s integration with \isabelle{} as well as \isac{}'s efficiency and
   19.14 +responsiveness in a multi-user setting. Both improvements are related to
   19.15 +the introduction of parallelism in \isabelle{} in recent years.
   19.16  
   19.17 -\section{Isabelle}
   19.18 +Section \ref{sec:isabelle} introduces \isabelle{}, its basic architecture
   19.19 +and the main components it is comprised of. It furthermore discusses \textit{Standard ML} and
   19.20 +the \textit{Poly/ML} compiler it uses (section \ref{ssec:polyml}).
   19.21 +Section \ref{sec:isadevdoc} is about \isabelle{}'s development and documentation
   19.22 +workflow which has mostly been adopted for the work on \isac{}. An overview
   19.23 +of the basics of why parallelism was added to \isabelle{}'s computation model
   19.24 +in section \ref{ssec:isab-parallel} wraps up the main part about \isabelle{}. In order to make the gain from \isac{}'s improvements comprehensible,
   19.25 +section \ref{ssec:isac-design} gives some background information about
   19.26 +\isac's principal design and architecture.
   19.27 +Section \ref{ssec:isac-archi} shows \isac's
   19.28 +architecture, which had to be adhered to by the study's implementation
   19.29 +work.
   19.30 +Since the case study concerns deeply invasive development in \isac{},
   19.31 +section \ref{ssec:future-isab} is dedicated to the question of
   19.32 +how \isac{}'s prototype development relates to
   19.33 +\isabelle{}'s ongoing development, which is invasive, too.
   19.34 +The detailed descriptions of the study's implementation
   19.35 +work are section \ref{ssec:parall-thy} on the integration
   19.36 +of \isac's knowledge definitions into \isabelle{}'s parallel theory evaluation
   19.37 +and section \ref{ssec:session-manag} on the introduction of concurrency
   19.38 +to \isac's user session management.
   19.39 +
   19.40 +
   19.41 +\section{\isabelle{}}
   19.42  \label{sec:isabelle}
   19.43 +\isabelle{} is one of the leading computer theorem provers, as introduced in section \ref{sec:ctp}. It is used in many academic courses on the semantics of programming languages, on program verification, as an introduction to mathematical proof, etc. But the number of professional users can hardly be estimated. Continuous reading of the mailing list for technical support \cite{pipermail2014clisa} suggests that more than a hundred people worldwide have already done notable development within a PhD, during academic postdoctoral work or as engineers in industry. \isabelle{} represents more than a hundred man-years of development, dedicated within the last twenty-five years and still ongoing. \isabelle{} was started by Larry Paulson as a ``logical framework'', an answer to the issue of {\em The Next Seven Hundred Theorem Provers} mentioned on page \pageref{par:next-700}. 
   19.44  
   19.45 -\subsection{User Requirements}
   19.46 -about half page
   19.47 +Today, development is ongoing at three universities: at the {\em Computer Laboratory of the University of Cambridge}, Larry Paulson develops specific automated provers and formalizes mathematics and security protocols. The {\em Chair for Logic and Verification} at the {\em Technical University of Munich} spawned from the institute of Manfred Broy and is led by Tobias Nipkow. Nipkow is responsible for the core system as well as for general theory development. About ten postdocs and PhD students are employed for these purposes \cite{tum2013team}. The parallelization efforts and the development of the \textit{Isabelle/jEdit} front-end (section \ref{ssec:isa-jedit}) are undertaken at {\em Université Paris-Sud} in the {\em Laboratoire de Recherche en Informatique}, supervised by Makarius Wenzel. In line with the wide range of expectations placed on TP technology as mentioned in section \ref{sec:ctp}, \isabelle{} strives for usability at the workplace of computer scientists, of practitioners in various engineering disciplines and of mathematicians. E.g. at \textit{RISC Linz} there is an initiative for verified computer algebra using \isabelle{}. Usability issues were the driving force behind the introduction of parallelism to \isabelle{}.
   19.48  
   19.49 -\subsection{Technologies}
   19.50 -about 4 pages
   19.51 +\isabelle{} integrates various tools and languages. This section describes the technologies involved in the system, its most important components and how they work together. It also discusses the repository, the development and documentation workflow and the introduction of parallelism. The distribution for all major platforms can be downloaded from the \isabelle{} website \cite{tum2014isainst}. Whenever files are referenced, ``{\tt \textasciitilde\textasciitilde}'' stands for the distribution's root directory. These references address the \isabelle{} release {\em Isabelle2013-2}.
   19.52 +% "Isabelle as document-oriented proof assistant" paper
   19.53 +% Isabelle was not designed; it evolved. Not everyone likes this idea.
   19.54 +%Specification experts rightly abhor trial-and-error programming. They
   19.55 +%suggest that no one should write a program without first writing a complete formal specification. But university departments are not software
   19.56 +% houses. Programs like Isabelle are not products: when they have served
   19.57 +% their purpose, they are discarded.
   19.58 +% Lawrence C. Paulson, "Isabelle: The Next 700 Theorem Provers"
   19.59  
   19.60 -\subsection{Architecture}
   19.61 -about 2 pages
   19.62 +% intelligent_computer_math_isabelle.pdf p. 484-487
   19.63 +% itp-smp.pdf (survey paper)
   19.64 +% parallel-ml.pdf
   19.65 +% itp-pide.pdf
   19.66 +% isabelle_proof_method_lang.pdf
   19.67 +% polyml_concurrent_sml.pdf
   19.68  
   19.69 -\subsection{Integration of Standard ML and Scala}
   19.70 -about 1 page
   19.71 +\subsection{\textit{Isabelle/ML}}
   19.72 +\label{sec:bigisaml}
   19.73 +This section presents \textit{Isabelle/ML}, which allows programmers to embed \textit{Standard ML} code into the \isabelle{} environment. The compiler used is \textit{Poly/ML}. It is fast and efficient, implements the full \textit{ML} standard \cite{milner1997definition} and provides additional features such as a foreign language interface and support for concurrency based on POSIX threads. \textit{Isabelle/ML} furthermore includes a number of libraries built on top of \textit{Poly/ML} with extra language constructs for concurrency. Before we take a deeper look into these, we need to discuss \textit{Standard ML} and the \textit{Poly/ML} compiler.
   19.74  
   19.75 -\subsection{Introduction of Parallelism and Concurrency}
   19.76 -about 3 pages
   19.77 +\subsubsection{\textit{Standard ML}}
   19.78 +\label{sec:isasml}
   19.79 +\textit{ML} stands for {\em meta language}. The language was briefly introduced in section \ref{sec:funhist}. It is a strongly typed functional programming language, which is not pure as it allows side effects and updatable references. Many people contributed ideas to the language. In order to maintain a common ground, a standard was established \cite{milner1997definition}. All the code samples in this document are written in \textit{Standard ML} syntax. Since \textit{Standard ML} is only a language definition and is not dictated by a reference implementation, there are multiple compilers available, most notably {\em Standard ML of New Jersey} ({\em SML/NJ}) \cite{smlnj2013standard}, {\em MLton} \cite{weeks2006whole} and {\em Moscow ML} \cite{romanenko2014moscow}. They differ in their priorities such as optimization, standard library size or platform support. \textit{ML} was originally designed as a command language for the interactive theorem prover \textit{LCF} (section \ref{sssec:lcf}) \cite{milner1978theory}. Because of its history in this area, it was specifically suitable for the development of \isabelle{}. Inference rules can easily be implemented as functions that accept a theorem and return a new one. Complete theorems can be constructed by applying a sequence of rules to other known theorems.
   19.80  
   19.81 +\paragraph{The Hind\-ley-Mil\-ner Type System.}\label{sec:hindleymilner}
   19.82 +\textit{Standard ML} makes use of {\em Hind\-ley-Mil\-ner type inference} \cite{milner1978theory}, based on a type system for the lambda calculus which utilizes parametric polymorphism, i.e. types can be declared using type variables (e.g. the \textit{Synchronized} signature on page \pageref{alg:guarded_access_sig}) and the decision on the instantiated types is left to the inference algorithm. This allows for the generic definition of functions while still ensuring static type safety. The type inference method is complete and always deduces the most general type of an expression without requiring type annotations. If it cannot find a suitable type for an expression, the expression's declaration must contain errors. Because type checking in \textit{ML} is safe, deduction of theorems is always sound. Wrong applications of rules lead to exceptions.
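As a minimal illustration in plain \textit{Standard ML}, the following declarations are typed by the inference algorithm without any annotations; the names are chosen for this example only.

```sml
(* Inferred most general type:
   compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b *)
fun compose f g = fn x => f (g x);

(* The type variables are instantiated at the call site:
   here 'a = int, 'b = string, 'c = int. *)
val inc = fn n => n + 1;
val show = Int.toString;
val result = compose show inc 41;  (* evaluates to "42" *)
```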
   19.83  
   19.84 -\section{{\isac}}
   19.85 +\bigskip
   19.86 +
   19.87 +Another important feature of \textit{Standard ML} is its design for robust, large scale, modular software. Signatures are part of the type system and describe the interfaces between different modules.
   19.88 +
   19.89 +\subsubsection{\textit{Poly/ML}}
   19.90 +\label{ssec:polyml}
   19.91 +In the course of his PhD thesis, David Matthews from the \textit{University of Cambridge} developed and implemented a language called {\em Poly}, which had many ideas in common with \textit{Standard ML}. The compiler was reimplemented in \textit{Poly} itself. Later, Matthews rewrote parts of the compiler to compile \textit{Standard ML} code. The result was {\em Poly/ML} \cite{matthews1995papers}, which is now available under an open-source license. It implements the full \textit{Standard ML} specification and provides a few extensions. The run-time system was written in \textit{C}. \textit{Poly/ML}'s most notable feature is its support for concurrency. While the \textit{Standard ML} specification \cite{milner1997definition} is limited to sequential programming, \textit{Poly/ML} provides concurrency primitives based on POSIX threads. Its native support for system threads and parallel garbage collection makes it unique amongst \textit{Standard ML} compilers. Another related project is {\em Concurrent ML}, an extension of \textit{SML/NJ} \cite{reppy1993concurrent}. The primitives in \textit{Poly/ML} include threads, mutexes and condition variables. An example of forking a thread was shown in the communicating sequential processes example on page \pageref{alg:csp}. For details on the operations and their implementation, please refer to \cite{matthews2010efficient}.
   19.92 +% pretty much extendable: garbage collection, memory management, libraries, more details of concurrency, foreign language interface, esp. C interface
   19.93 +
   19.94 +\subsubsection{\textit{Isabelle/ML}}
   19.95 +\label{sec:isabelleml}
   19.96 +\textit{Isabelle/ML} is a way of programming in \textit{Standard ML} which is encouraged and enhanced within the \isabelle{} infrastructure. It adds many library modules to plain \textit{Poly/ML}.
   19.97 +
   19.98 +\paragraph{Antiquotations.}\label{sec:antiquot}
   19.99 +A very powerful mechanism for easy access to \isabelle{} system values and logical entities is provided by so-called {\em antiquotations} \cite{wenzel2013impl}. Depending on their particular purpose, they are resolved at compile, link or run time. They can refer to various datatypes such as {\em theorems} (derived propositions), {\em theories} (containers for global declarations), {\em contexts} (deductive environment states) and many other concepts within the \isabelle{} framework. E.g. {\tt @\{context\}} refers to the context at compile time at the location where it is used, i.e. a \textit{Standard ML} value of type {\tt context}.
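For illustration, here are a few antiquotations as they might appear in an {\tt ML} block inside a theory based on \textit{Isabelle/HOL}; the concrete entities referenced are examples only.

```sml
(* Resolved statically by the Isabelle/ML compiler: *)
val ctxt = @{context};        (* the proof context at compile time *)
val th   = @{thm refl};       (* the theorem "?t = ?t" *)
val t    = @{term "x + y"};   (* a term, checked in the current context *)
```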
  19.100 +
  19.101 +\paragraph{Concurrency.}
  19.102 +\textit{Poly/ML} comes with very general mechanisms for concurrency. Its approach is based on shared resources and synchronized access. While this concurrency model is well established, it is easy for the programmer to overlook some detail in the coordination of threads and deadlocks can occur. In order to ensure correctness by design, \textit{Isabelle/ML} provides higher-order combinators and thereby hides synchronization details.
  19.103 +
  19.104 +\subparagraph{Guarded Access.}\label{sec:guarded-access}
  19.105 +Guarded access primitives operate on an abstract type called {\tt var}. They are available from the module {\em Synchronized}\footnote{\label{ftn:syncml}{\tt \textasciitilde\textasciitilde/src/Pure/Concurrent/synchronized.ML}} and the most notable elements of its interface are declared as
  19.106 +\imlcode{\label{alg:guarded_access_sig}
  19.107 +~~\=ty\=pe\=~'\=a \=var\\
  19.108 +\>val var: string -> 'a -> 'a var\\
  19.109 +\>val value: 'a var -> 'a\\
  19.110 +\>val guarded\_access: 'a var -> ('a -> ('b * 'a) option) -> 'b\\
  19.111 +\>val change: 'a var -> ('a -> 'a) -> unit \textrm{.}}
  19.112 +\noindent
  19.113 +This type is merely a wrapper for an \textit{ML} reference. Please note the type parameters {\tt 'a} and {\tt 'b}. The constructor {\tt var} creates a {\tt var} from a name and an initial value. {\tt value} is the inverse operation and performs a read access. Machine code produced by the \textit{Poly/ML} compiler generally treats memory cells as volatile, i.e. calls to {\tt value} need not be synchronized because read operations always return {\em some} value. The {\tt guarded\_access} combinator is where the magic happens. Besides the {\tt var} element $x$ with value $v$ that the write access will be performed on, it expects a function $f$ that maps $v$ to an option element of the tuple type {\em 'b * 'a}. The type {\em option} is \textit{Standard ML}'s equivalent of the {\tt Maybe} monad in \textit{Haskell}: it can either be {\tt NONE} or contain a value, i.e. in our case {\tt SOME ($y$, $v2$)}. $f$ is executed in a critical section of $x$ on its value $v$. As long as it returns {\tt NONE}, the current execution context waits for another thread to change $v$. If $f$ returns {\tt SOME ($y$, $v2$)}, the value of $x$ is set to $v2$ and the call to {\tt guarded\_access} returns with result $y$.
  19.114 +The original definition of {\tt change}\footnote{{\tt \textasciitilde\textasciitilde/src/Pure/Concurrent/synchronized.ML}}, i.e. an unconditional, synchronized update to a {\tt var} element is given by
  19.115 +\imlcode{~~\=fun change\_result var f = guarded\_access var (SOME o f);\\
  19.116 +\>fun change var f = change\_result var (fn x => ((), f x)); \textrm{.}}
  19.117 +\noindent
  19.118 +It is easily possible to implement higher-order mechanisms using these primitives. See \cite{matthews2010efficient} for an example of a message queue.
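As a further sketch of how these primitives compose, the following hypothetical one-slot mailbox blocks a producer while the slot is full and a consumer while it is empty; it only assumes the {\em Synchronized} signature quoted above.

```sml
(* A one-slot mailbox: "put" waits while the slot is full,
   "take" waits while it is empty. *)
val slot : int option Synchronized.var =
  Synchronized.var "slot" NONE;

fun put x =
  Synchronized.guarded_access slot
    (fn NONE => SOME ((), SOME x)   (* empty: store x, return () *)
      | SOME _ => NONE);            (* full: wait for a take *)

fun take () =
  Synchronized.guarded_access slot
    (fn NONE => NONE                (* empty: wait for a put *)
      | SOME x => SOME (x, NONE));  (* full: return x, clear the slot *)
```

Each pattern-match implements the {\tt ('b * 'a) option} contract: {\tt NONE} means wait, {\tt SOME ($y$, $v2$)} means return $y$ and store $v2$.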
  19.119 +% EXTEND: exception handling, performance
  19.120 +
  19.121 +\subparagraph{Futures.}\label{sec:isafut}
  19.122 +While the last approach allows for process synchronization on a high level, most of the complexity persists and as software gets larger it still gets increasingly hard to coordinate a high number of concurrent computations efficiently. For this reason, \textit{Isabelle/ML} provides {\em futures} (section \ref{sec:futurespro}) and is thereby able to completely hide threads and their synchronization from the programmer.
  19.123 +The goal was to provide a mechanism for parallelism which is both simple to use and performant with a special emphasis on operations typically carried out by a proof engine. {\em Future values} fulfill these requirements. They are value-oriented, i.e. the focus hereby is on parallelism rather than concurrency. Error handling happens synchronously by means of exceptions and asynchronously with interrupts.
  19.124 +The {\em Future} module\footnote{\tt \textasciitilde\textasciitilde/src/Pure/Concurrent/future.ML} has a comprehensive signature. The following declarations are only an excerpt:
  19.125 +\imlcode{~~\=type 'a future\\
  19.126 +\>val fork: (unit -> 'a) -> 'a future\\
  19.127 +\>val cancel: 'a future -> unit\\
  19.128 +\>val is\_finished: 'a future -> bool\\
  19.129 +\>val join: 'a future -> 'a\\
  19.130 +\>val map: ('a -> 'b) -> 'a future -> 'b future\\
  19.131 +\>val value: 'a -> 'a future}
  19.132 +\noindent
  19.133 +Notice that {\tt 'a} is a placeholder for an arbitrary type which a future can represent. Calls to {\tt Future.join} synchronize the evaluation and wait for the future's result if it is not already available. In case an exception is raised during the evaluation, its propagation will wait until the future is joined. {\tt Future.cancel} stops a future's evaluation if it has not finished, otherwise the function has no effect. The evaluation of futures is managed by means of tasks and worker threads. The number of workers should be related to the number of available processing cores. The exact number, however, is adjusted dynamically because the future scheduler always keeps a limited number of inactive threads in memory, such that they can immediately take over if another thread is temporarily stalled. In practice, the additional functions {\tt Future.map} and {\tt Future.value} have turned out to be useful in reducing the number of tasks and as a consequence the scheduling overhead. {\tt Future.map} can append another operation to an existing future, which will be executed on the originally scheduled function's result. If the original future's evaluation has not started yet, the appended operation will be computed within the same task. {\tt Future.value} produces a dummy future holding the function's argument as a result. Additionally, futures can be assigned priorities in order to influence execution order which by default is based on creation time. Also, futures can be organized in groups and even hierarchies of groups. This allows futures that depend on each other to be cancelled whenever one of them raises an exception.
  19.134 +
  19.135 +A simple {\em future} example has already been shown on page \pageref{alg:futures}. Internally, the {\em Futures} module is based on the guarded access mechanisms introduced above. Based on futures, \textit{Isabelle/ML} provides more specific parallel combinators such as the parallel list combinators {\tt map} and {\tt find}. The latter makes use of future groups and throws an exception as soon as a match is found, such that other search branches are not evaluated. This example shows how exceptions can be used as a basic means for functional interaction of futures without reintroducing the complexities usually associated with inter-process communication. See \cite{matthews2010efficient} for further details and a discussion of the performance of futures.
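To connect the operations from the excerpt above, the following sketch forks two independent tasks and joins their results; all names other than those of the {\em Future} module are chosen for this example.

```sml
(* Two independent evaluations of f, scheduled as separate tasks
   and joined once both results are available. *)
fun par_pair f (a, b) =
  let
    val x = Future.fork (fn () => f a)
    val y = Future.fork (fn () => f b)
  in (Future.join x, Future.join y) end;

(* Future.map appends an operation to an existing future;
   Future.value wraps a known result as a dummy future. *)
val doubled = Future.map (fn n => 2 * n) (Future.value 21);
val r = Future.join doubled;  (* 42 *)
```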
  19.136 +
  19.137 +\subsection{\textit{Isabelle/Isar}}
  19.138 +\label{sec:isaisar}
   19.139 +While in older versions of \isabelle{} theorem proving tasks had to be solved using raw \textit{ML} tactic scripts, theories and proofs can now be developed interactively in an interpreted, structured, formal proof language called {\em Isar} (Intelligible semi-automated reasoning) \cite{wenzel2013isar,wenzel1999isar}. \textit{Isar} was designed to address an issue common to state-of-the-art theorem provers: although they had successfully formalized a good amount of existing mathematical knowledge, logics and computer science theory, it was still hard to reach potential target groups unfamiliar with the involved computer programming requirements. The primary users of theorem provers should be mathematicians and engineers who utilize them to formulate new proofs and for verification purposes. \textit{Isar} enables these users (``authors'' in fig. \ref{fig:isar-use}) to write proofs and theories in a way that can be understood by machines while being much closer to human language than \textit{ML} code or a logical calculus. It hides operational details as much as possible thanks to its notion of {\em obvious inferences}. With a little extra care it can therefore also be used for presentation to an audience. The short, well-known proof that the square root of 2 is an irrational number is presented in appendix \ref{app:sqrt2-isar}.
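
To convey a first impression of the language, the following is a minimal structured \textit{Isar} proof of the commutativity of conjunction, in the style of the introductory \textit{Isar} tutorials (using the ASCII notations {\tt \&} and {\tt -{}->} for conjunction and implication):
\imlcode{~~\=le\=mma "A \& B --> B \& A"\\
\>proof\\
\>\>assume ab: "A \& B"\\
\>\>from ab have a: "A" ..\\
\>\>from ab have b: "B" ..\\
\>\>from b a show "B \& A" ..\\
\>qed}
\noindent
Here {\tt proof} applies a standard introduction rule to the goal and {\tt ..} denotes a single obvious inference step.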
  19.140 +
  19.141 +\begin{figure}
  19.142 +\centering
  19.143 +\def\svgwidth{18.5em}
  19.144 +\resizebox{.5\textwidth}{!}{\input{images/isar_use.pdf_tex}}
  19.145 +\caption{\textit{Isar} proof notions \cite{wenzel1999isar}.}
  19.146 +\label{fig:isar-use}
  19.147 +\end{figure}
  19.148 +
   19.149 +The {\em Isar virtual machine} interpreter performs proof checking and incorporates an operational semantics for \textit{Isar}. It is provided by the \textit{Isabelle/Isar} framework, which embeds \textit{Isar} into the \isabelle{} environment. The framework supplies the \textit{Isar/VM} interpreter with all available proofs and definitions, which are implemented in theory documents. These theories are structured in an inheritance hierarchy that forms a directed, acyclic graph. \textit{Isar} proofs operate on the basis of certain object logics. The most established logic is \textit{Isabelle/HOL}, i.e. higher-order logic (simply typed set theory), but a wide range of logics is available through the \textit{Isabelle/Pure} meta logic. Another important element are {\em proof methods}, functions which map proof goals to appropriate proof rules. \textit{Isar}'s lack of mechanisms for the formulation of automated proof procedures turned out to cause an unacceptable amount of duplicate code. Very recent work on a new language called {\em Eisbach}, which is based on \textit{Isar}, has tried to address these issues \cite{matichuk2014isabelle}.
  19.150 +
  19.151 +\subsection{\textit{Isabelle/Scala}}
  19.152 +\label{ssec:isa-scala}
   19.153 +The original user interface for the \isabelle{} theorem prover was a project called {\em Proof General}, a proof assistant front-end based on \textit{Emacs} which is also available for other provers like {\em Coq} \cite{chlipala2011certified}. Its interaction model offers some improvements over a basic TTY model: the user enters a prover command and immediately receives a result, such as information on the proof state or goals, before issuing the next command. But interaction remains sequential and synchronous. There are two main weaknesses to \textit{Proof General} and similar approaches. First, the underlying editor framework is outdated: \textit{Emacs} offers a very powerful environment, but its graphical user interface is dated, and its \textit{Lisp} engine does not support multithreading and therefore does not allow the development environment to make full use of the available CPU power. The other main shortcoming is the poor interaction model, which is limited to a synchronous input / output sequence. Instead of simple text editing tools, a fully-featured IDE was desirable \cite{wenzel2012asynchronous}. In addition, the IDE should be available on many platforms, and since the \isabelle{} developers' main focus is theorem proving, they wanted to draw on existing IDE frameworks rather than develop a new one from scratch. They chose to make use of the \textit{Java Virtual Machine} and enable tools written for the \textit{JVM} to communicate with the \isabelle{} prover via simple byte streams. The resulting asynchronous document model used for the communication between \textit{ML} and \textit{JVM} allows GUIs, text editors like \textit{jEdit}, IDE frameworks such as \textit{Eclipse} as well as web services to make use of the \isabelle{} prover. The outcome of the efforts to design this {\bf p}rover {\bf IDE} framework is called {\em Isabelle/PIDE}.
In order to bridge the differing programming models and data structures of \textit{Standard ML} and the \textit{JVM}, \textit{Scala} was chosen as the \textit{JVM} communication endpoint because its flexibility and functional features allow programmers to imitate a programming style similar to \textit{Isabelle/ML}. \textit{Scala}'s actors (section \ref{sec:actormodel}) were used heavily for asynchronous communication and concurrency until recently. Since the original actor library was replaced by {\em Akka}, the \isabelle{} developers have decided to imitate the \textit{ML} variants of the parallelism mechanisms in \textit{Scala} without actors \cite{wenzel2014actors}. All data structures that are part of the protocol have been implemented in both languages side by side. For example, the datatype definition of file system path elements is
  19.154 +\imlcode{~~\=da\=tatype elem =\\
  19.155 +\>\>Root of string |\\
  19.156 +\>\>Basic of string |\\
  19.157 +\>\>Variable of string |\\
  19.158 +\>\>Parent;}
  19.159 +\noindent
   19.160 +in \textit{Standard ML}\footnote{\tt \textasciitilde\textasciitilde/src/Pure/General/path.ML}. Here, a value of type \texttt{elem} is one of four variants, three of which carry a \texttt{string}. The equivalent data structure in \textit{Scala}\footnote{\tt \textasciitilde\textasciitilde/src/Pure/General/path.scala} is given as four concrete classes inheriting from the abstract class \texttt{Elem}:
  19.161 +\imlcode{~~\=sealed abstract class Elem\\
  19.162 +\>private case class Root(val name: String) extends Elem\\
  19.163 +\>private case class Basic(val name: String) extends Elem\\
  19.164 +\>private case class Variable(val name: String) extends Elem\\
  19.165 +\>private case object Parent extends Elem}
  19.166 +\noindent
   19.167 +The internal protocol uses \textit{YXML}, a transfer format based on XML \cite{wenzel2011isabelle}. The protocol details are deliberately kept private instead of being published as a specification. This allows the developers to make substantial changes to the protocol and document model in order to improve robustness and performance without further ado. Instead, both \textit{Isabelle/ML} and \textit{Isabelle/Scala} offer public, static APIs for their communication, which are maintained and kept simple and stable.
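
The basic idea of \textit{YXML} can be sketched as follows. Instead of the reserved characters and quoting rules of XML, it employs only two control characters, here written {\tt X} (ASCII 5) and {\tt Y} (ASCII 6), which cannot occur in well-formed XML text. An element is transliterated according to the scheme
\imlcode{~~\=XML:~~~<name att="val">body</name>\\
\>YXML:~~X Y name Y att=val X body X Y X}
\noindent
so markup can be recognized without any quoting or escaping of the body text, which makes parsing trivial and efficient.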
  19.168 +
  19.169 +\begin{figure}
  19.170 +\centering
  19.171 +\def\svgwidth{17.5em}
  19.172 +\resizebox{.6\textwidth}{!}{\bf\input{images/pide_comm.pdf_tex}}
  19.173 +\caption{The \textit{Isabelle/PIDE} communication model \cite{wenzel2014uipide}.}
  19.174 +\label{fig:pide-comm}
  19.175 +\end{figure}
  19.176 +
  19.177 +Fig. \ref{fig:pide-comm} outlines the basic communication model between the \textit{JVM} and the prover. Whenever the user edits a file in an editor, a description of this update is sent to the prover which then produces markup to annotate the new contents with proof data and other semantic information. Then the changes on the editor side are reverted and replaced by the new, annotated content. As a consequence, the interaction mechanism adheres to the functional paradigm's philosophy by using strictly immutable document snapshots of versioned file histories.
  19.178 +% scala actors -> email!
  19.179 +%pide_ocaml.pdf
  19.180 +%isabelle_system_manual.pdf
  19.181 +%async_isabelle_scala_jedit.pdf
  19.182 +% itp-pide.pdf
  19.183 +% Walther mail 24.04. "Fwd: [isabelle-dev] scala-2.11.0" (scala actors)
  19.184 +% isabelle-doc.pdf (YXML)
  19.185 +
  19.186 +\subsection{\textit{Isabelle/jEdit}}
  19.187 +\label{ssec:isa-jedit}
  19.188 +Recent \isabelle{} distributions include a complete prover IDE prototype implementation based on the text editor framework {\em jEdit} \cite{wenzel2012isabelle}. \textit{Isabelle/jEdit} features a completely new interaction model that performs continuous, parallel proof checking. It uses GUI functionalities such as highlighting, coloring, tooltips, popup windows and hyperlinks to annotate user code written in \textit{Isar} and \textit{ML} incrementally with semantic information from the prover (see fig. \ref{fig:jedit_feedback}). Prover and editor front-end are executed independently and never block each other thanks to the asynchronous protocol. Additionally, \textit{Isabelle/jEdit} includes several plugins and fonts to facilitate entering and rendering of mathematical symbols.
  19.189 +
  19.190 +\begin{figure}
  19.191 +\centering
  19.192 +\includegraphics[width=94mm]{jedit_feedback} %width=.95\textwidth
  19.193 +\caption{GUI feedback in \textit{Isabelle/jEdit}.}
  19.194 +\label{fig:jedit_feedback}
  19.195 +\end{figure}
  19.196 +
  19.197 +The IDE also manages loaded theory files and the computation of their dependency graph. Changes to theories are automatically propagated appropriately and the user receives immediate feedback on the processing of theories (fig. \ref{fig:jedit_thy}).
  19.198 +\begin{figure}
  19.199 +\centering
  19.200 +\includegraphics[width=80mm]{theories_processing} %width=.95\textwidth 70mm
   19.201 +\caption{Feedback on theory processing in \textit{Isabelle/jEdit}.}
  19.202 +\label{fig:jedit_thy}
  19.203 +\end{figure}
  19.204 +%async_isabelle_scala_jedit.pdf
  19.205 +%jedit.pdf
  19.206 +
  19.207 +\subsection{The Prover Framework}
  19.208 +\label{sec:isaatp}
  19.209 +% isabelle/pure vs. isabelle/hol and other logics
  19.210 +% proof general
  19.211 +% pure framework
  19.212 +% graphics Isar-slides-TPHOLs99.ps.gz
  19.213 +%hol_springer.pdf:361
  19.214 +%isahol_proofdisproof.pdf
  19.215 +%itp-pide.pdf:5
  19.216 +%isabelle_tutorial.pdf
  19.217 +%isar_sml_antiquot.pdf:6
  19.218 +%Isar-slides-TPHOLs99.ps.gz:2
  19.219 +%isa_pide_educational.pdf:146
  19.220 +
  19.221 +% unlimitide undo, document model immutable: previous states are preserved and full history is saved. unreachable states are garbage collected
  19.222 +% cross-link step 2 under "integration with isabelle's par..." annotations
   19.223 +The paragraph titled {\em The Next Seven Hundred Theorem Provers} in section \ref{par:next-700} (page \pageref{par:next-700}) explained that \isabelle{} was initiated and designed as a logical framework for automated theorem provers. Its design is described in \cite{wenzel2009parallel}. The metaprover which supports interactive proofs with a collection of external automated provers is called {\em Sledgehammer}. The standard provers are {\em E}, {\em SPASS}, {\em Vampire} and {\em Z3}. Fig. \ref{fig:sledgehammer} (page \pageref{fig:sledgehammer}) depicts the \textit{Sledgehammer} panel in \textit{Isabelle/jEdit}.
  19.224 +\begin{figure}
  19.225 +\centering
  19.226 +\includegraphics[width=.95\textwidth]{sledgehammer}
   19.227 +\caption{An instance of the \textit{Sledgehammer} panel.}
  19.228 +\label{fig:sledgehammer}
  19.229 +\end{figure}
   19.230 +In those cases where theorems are invalid and therefore cannot be proved, the two disproof tools {\em Quickcheck} and {\em Nitpick} can be utilized to produce counterexamples. See \cite{blanchette2011automatic} for a more detailed overview of the integration of these external tools.
  19.231 +% proof planner Isa?
  19.232 +
  19.233 +%\begin{figure}
  19.234 +%\centering
  19.235 +%\includegraphics[width=.65\textwidth]{lcf_concepts.png}
  19.236 +%\caption{Main concepts of the LCF approach \cite{brandt2007theorem}}
  19.237 +%\label{fig:lcf-concepts}
  19.238 +%\end{figure}
  19.239 +
  19.240 +%\begin{figure}
  19.241 +%\centering
  19.242 +%\includegraphics[width=.65\textwidth]{isaisar_concepts.png}
  19.243 +%\caption{Main concepts of Isabelle/Isar \cite{brandt2007theorem}}
  19.244 +%\label{fig:isaisar-concepts}
  19.245 +%\end{figure}
  19.246 +
  19.247 +
  19.248 +\subsection{Development and Documentation}
  19.249 +\label{sec:isadevdoc}
   19.250 +Apart from the technologies used in \isabelle{}, \isabelle{}'s development process and system documentation also had an impact on this thesis. Adapting to \isabelle{}'s specifics was quite challenging. \isabelle{}'s development process is much faster and much more radical than commercial software development due to the specific kind of development. It is led by three persons (see the introduction to section \ref{sec:isabelle}; two of them supervise and also write code themselves) and carried out by a small team of about ten postdocs and PhD students, each working under contracts of at least three years. New members grow into the team by doing projects and master's theses before joining it. According to my supervisor, this particular kind of development has led to the following way of system documentation:
  19.251 +\begin{enumerate}
   19.252 +\item{\bf The code} itself, without any further comments except in a few specific cases. This is common in functional programming and very different from what one is used to from e.g. \textit{Javadoc}. The code can be inspected in the repository, see item \ref{enum:isa-repos} below.
   19.253 +\item\label{enum:isa-man}{\bf Reference manuals and tutorials}, which can be found online \cite{tum2013isadoc} and are easily accessible from \textit{Isabelle/jEdit} (section \ref{ssec:isa-jedit}). Both manuals and tutorials are automatically held in sync with the code by specifically developed mechanisms such as antiquotations (section \ref{sec:bigisaml}, page \pageref{sec:antiquot}), e.g. {\tt @\{type \dots\}}, {\tt @\{const \dots\}}, etc. These entries in the documentation raise exceptions in case some type or constant has changed in the code.
   19.254 +\item\label{enum:isa-repos}{\bf The repository}, publicly readable \cite{tum2014repo}, is the third pillar of documentation. Commits must follow a {\em minimal changeset} policy, which allows for quick queries and documents particular interdependencies across several files. This kind of documentation of complicated connections is very efficient and always up to date, even with frequent changes.
  19.255 +\end{enumerate}
  19.256 +\noindent
   19.257 +This documentation model has evolved during more than two decades and still allows for ongoing rapid development with invasive changes, last but not least parallelization. However, the documentation relevant for this thesis is marginal. Parallelization, as already mentioned, has been done by Makarius Wenzel and was preceded by a feasibility study conducted by a master student \cite{haller2006object}. Wenzel's respective papers \cite{matthews2010efficient,wenzel1999isar,wenzel2012asynchronous, wenzel2011isabelle,wenzel2012isabelle} concern fundamental considerations. The most relevant reference manual \cite{wenzel2013impl} contains less than ten pages on parallelization. All the rest had to be found in the code (several hundred thousand LOCs\footnote{{\em lines of code}}). The gap between papers and code was hard to bridge.
  19.258 +
  19.259 +\subsection{Reasons for Parallelism in \isabelle{}}
  19.260 +\label{ssec:isab-parallel}
   19.261 +\isabelle{} is the first theorem prover able to exploit multi-core hardware. The reasons for introducing parallelism in \isabelle{} are directed towards the future and complement the considerations discussed for functional programming. Three motivations for parallelization are mentioned for \isabelle{}, and they have all been addressed at once: parallel invocation of multiple automated provers, a responsive prover IDE and efficient theory evaluation all aim at better usability in engineering practice. All three development issues are being tackled successfully; an end cannot be foreseen yet.
  19.262 +
  19.263 +\subsubsection{Run Automated Provers in Parallel}
   19.264 +\isabelle{}'s collection of automated provers was mentioned in section \ref{sec:isaatp}. \isabelle{} still has a mechanism to connect to remote prover instances, and these already featured parallel access to external resources. Since personal computers and laptops have become powerful enough to run \isabelle{}, this hardware has also become powerful enough to run automated provers. Running them in parallel on a local machine, however, required a redesign.
  19.265 +
  19.266 +\subsubsection{Provide a Tool Usable in Engineering Practice}
   19.267 +The paragraph about the {\em Continuous Penetration of Computer Science} by formal methods in section \ref{par:invade-cs} (page \pageref{par:invade-cs}) mentioned the increasing demand for theorem provers in the practice of software engineering and related mathematics. In particular, software engineers are used to comprehensive support by integrated development environments (IDEs): interlinked code, various views on relations between code elements, etc. Since TPs were used by experts only until recently, specific user interfaces were sufficient. For \isabelle{} this was {\em Proof General} (see section \ref{ssec:isa-scala}), which is based on \textit{Emacs}. Those who still use this editor, dominant during past decades, know how different \textit{Emacs} is: not reasonable for today's engineering practice. So the issue was to meet the usability requirements of contemporary engineering practice. \isabelle{}'s attempt to meet such requirements is the {\em PIDE} project (section \ref{ssec:isa-scala}). \textit{Isabelle/jEdit} (section \ref{ssec:isa-jedit}) is the reference implementation, which is accompanied by the development of an \textit{Eclipse} plugin \cite{andriusvelykis2013isaclipse} and a browser-based front-end under construction.
  19.268 +
  19.269 +\subsubsection{Accelerate theory evaluation}
   19.270 +My supervisor's first experience with \isabelle{} was about fifteen years ago on a high-end SPARC workstation. The initial evaluation of \textit{Isabelle/HOL} took more than twenty minutes, not to speak of the considerable amount of time an experienced system administrator needed for the installation. Since then, \textit{Isabelle/HOL} has become about three times larger, but the initial evaluation of theories takes about two minutes with recent \isabelle{} releases. \textit{Isabelle/jEdit} represents theory evaluation as shown in fig. \ref{fig:jedit_thy} on page \pageref{fig:jedit_thy}. Parallel branches in the dependency graph are evaluated in parallel. Without this kind of speedup, large theory developments involving several dozens of theories would not be manageable. The most recent \isabelle{} releases do not package binaries of evaluated theories any more; instead, they pack further components into the bundle. The bundle of the current release is more than 400 MB for download. All other resource demands of \isabelle{} are at the limits of present mainstream hardware as well: \isabelle{} does not run below 4 GB of main memory and requires a dual-core processor with more than 2 GHz per core.
  19.271 +
  19.272 +
  19.273 +\section{\isac{}}
  19.274  \label{sec:isac}
   19.275 +\isac{} is a prototype in two respects: in the mathematics-engine, which is based on \isabelle{} and written in \textit{Standard ML}, as well as in the front-end, which involves rule-based dialogs and is written in \textit{Java}. The mathematics-engine heavily reuses technology from \isabelle{} and thus comprises a comparably small portion of code: 25,000 LOCs\footnote{{\em lines of code}} plus 15,000 LOCs of knowledge in theories compiled to \textit{Standard ML} for fast access and 40,000 LOCs for tests. This code features functionality which is radically new for educational mathematics systems. The gain from TP technology is explained in section \ref{ssec:isac-design}. The front-end exploits the mathematics-engine's TP-based functionality for dialogs on a new level of flexibility and user guidance. The gain from rule-based dialog guidance is addressed in section \ref{ssec:isac-archi}. Since \isabelle{} is still under rapid development, the principal question arises whether \isac's prototype development can persistently adjust to \isabelle{}, an issue addressed in section \ref{ssec:future-isab}. Otherwise the success achieved with the case study on parallelization would be worthless in the long term. The description of the case study comprises two parts: the integration with \isabelle{}'s parallel theory evaluation is described in section \ref{ssec:parall-thy} and section \ref{ssec:session-manag} discusses the parallelization of \isac{}'s user session management.
  19.276  
  19.277 -\subsection{Philosophy and User Requirements}
  19.278 -about 2 pages
  19.279 +\subsection{Principal Design Decisions}
  19.280 +\label{ssec:isac-design}
   19.281 +With the exception of a system under construction by the Scandinavian \textit{E-Math} project \cite{emath2013emath}, \isac{} is still the only educational mathematics system worldwide which adheres to concepts of TP and is built by reusing TP technology. All other math tutoring systems are based on computer algebra. Thus, the distinctive design decision to rely on TP deserves an explanation.
  19.282  
  19.283 -\subsection{Initial Architecture}
  19.284 -about 2 pages
   19.285 +\paragraph{TP concepts ensure logical correctness} in stepwise problem solving. When learning independently of a human tutor or supervisor, a student wants to know whether the calculations are done correctly or not. An educational software system supporting this expectation cannot be based on computer algebra systems. These are in no way grounded in logic (an unresolved issue for computer algebra, investigated for instance at \textit{RISC Linz}), so true / false or correct / incorrect requires an underpinning with logical concepts, as is the case, e.g., in the {\em Theorema} project at \textit{RISC Linz}. The system under construction in \textit{E-Math} supports so-called {\em structured derivations}, which are close to calculations written by hand (see fig. \ref{fig:qed-editor}).
  19.286  
  19.287 -\subsection{Integration with Isabelle\'s Parallelism}
  19.288 -about 2 pages
  19.289 +\begin{figure}
  19.290 +\centering
  19.291 +\includegraphics[width=.95\textwidth]{QED_Editor}
  19.292 +\caption{Structured derivation editor (source: \cite{back2010structured}).}
  19.293 +\label{fig:qed-editor}
  19.294 +\end{figure}
  19.295  
  19.296 -\subsection{Concurrent User Session Management}
  19.297 -about 5 pages
  19.298 +\noindent
   19.299 +This kind of calculation has been proved logically equivalent to natural deduction \cite{back2010structured}, which forms the logical foundation of \isabelle{} and other provers as mentioned in section \ref{sssec:holfam}.
  19.300  
   19.301 +With the decision for \isabelle{}, \isac{} intentionally dissociates itself from the kind of artificial intelligence that tries to imitate human behavior. \isac{} does not intend to mimic a human tutor by modeling the system with respect to human thought, human comprehension or human learning. Rather, \isac{} is designed as a model of mathematics, following the structure of mathematics from the very bottom, from formal logics as implemented in \isabelle{}. However, AI technologies can profit from the strength, generality and flexibility of the mathematics-engine resulting from this design. The principal strengths of \isac's math engine are described in the following paragraphs.
  19.302 +
   19.303 +\paragraph{TP technology enables new functionality} in \isac's mathematics-engine. The following features arise naturally from TP:
  19.304 +\begin{description}
  19.305 +\sloppy
  19.306 +\item[Check students' input automatically, flexibly and reliably.] Given a problem of applied mathematics by a formal specification (input, output, precondition and postcondition; generally hidden from the student), any formula (or a theorem for application) input by the student establishes a proof situation. Automated provers then derive, if possible, the input from the logical context. This is what computer theorem provers are built for. They represent the technology for accomplishing this task as generally and reliably as possible at the present state of the art in computer mathematics.
  19.307 +\item[Give explanations on request by students.] Provers designed on the basis of the \textit{LCF principle} (section \ref{sssec:lcf}) such as \isabelle{}, derive all knowledge from ``first principles''. For each definition and for each theorem the respective prerequisites can be traced down to the most fundamental axioms. All this knowledge is represented in human readable form \cite{tum2013isahol}. Thus the situation in systems like \isabelle{} is quite different from computer algebra systems. While documentation for the latter requires additional efforts for creation, all documentation is already there in \isabelle{}. Generation of explanations calls for filtering out details, which could distract or overwhelm students. Another nice feature of \textit{Isabelle/Isar}, although not relevant for applied mathematics, are human readable proofs close to mathematical traditions (see section \ref{sec:isaisar} and appendix \ref{app:sqrt2-isar}).
   19.308 +\item[Propose a next step if students get stuck.] This feature seems indispensable for independent learning. However, it exceeds the traditional discipline of TP: e.g. the {\it lemma} $\int{1+x+x^2\;{\it dx}} = x + \frac{x^2}{2} + \frac{x^3}{3} + c$ can be proved, but given $\int{1+x+x^2\;{\it dx}}$ there is no direct way in TP to calculate the result $x + \frac{x^2}{2} + \frac{x^3}{3} + c$. Since it is indispensable, \isac{} provides this feature by a novel combination of deduction and computation, called {\em Lucas-Interpretation} \cite{neuper2012automated}. \textit{Lucas-Interpretation} works on programs which describe how to solve a problem of applied mathematics in a quite traditional way. In addition to maintaining an environment as usual in interpretation, \textit{Lucas-Interpretation} maintains a logical context, such that automated prov\-ers are provided with all current knowledge required to check user input. \cite{daroczy2013error} gives a brief introduction to \textit{Lucas-Interpretation}.
  19.309 +\end{description}
  19.310 +\noindent
  19.311 +These three features together establish a powerful interface to the mathe\-matics-engine. With these given, interaction can be analogous to learning how to play chess in interaction with a chess program. The player executes a move (the student inputs a formula) and the program checks the correctness of the move (\isac{} checks the input). The program performs a move (\isac{} proposes a next step) and the player accepts or changes it. If the player is in danger of losing the game (the student gets stuck within the steps towards a problem solution), they can go back a few moves and try alternative moves. Or the player can even switch roles with the system and watch how the system copes with the unfortunate configuration. All these choices are available to a student analogously when learning mathematics in interaction with \isac.
  19.312 +
   19.313 +The comparison of interaction in \isac{} with interaction in playing chess makes clear that learning with a mechanical system does not necessarily lead to drill and practice in operational mechanisation. The degree of freedom in interaction allows for the evaluation of alternatives and thus for learning by trial and error, comparing strategies, etc.
  19.314 +
  19.315 +\subsection{Architecture Separating Mathematics and Dialogs}
  19.316 +\label{ssec:isac-archi}
  19.317 +The strengths of \isac's mathematics-engine, as described above, provide powerful features for human-machine interaction and thus call for concepts and technology from AI. In order to cope with increasing complexity, a separation of concerns between AI and mathematics is required.
  19.318 +
  19.319 +\subsubsection{Separation of Concerns between Dialogs and Mathematics}
  19.320 +This separation is reflected in \isac's architecture shown in fig. \ref{fig:isac-archi}.
  19.321 +\begin{figure}
  19.322 +\centering
  19.323 +\def\svgwidth{32.5em}
  19.324 +\resizebox{\textwidth}{!}{\input{images/isac_arch.pdf_tex}}
  19.325 +\caption{\isac{}'s architecture as seen by users.}
  19.326 +\label{fig:isac-archi}
  19.327 +\end{figure}
  19.328 +The front-end on the left within a \textit{Java} environment comprises {\em GUI 1..n}, an arbitrary number of remote devices for input and output, the \isac{} {\em server} with the {\em dialog} component. The {\em dialog} component comprises a rule-based system as described in \cite{kienleitner2012towards}, not shown in the figure. {\em Dialog authors} are not concerned with mathematics and are free to focus on the complexity of dialog guidance, see \cite{daroczy2013error} and an example from the paper in fig. \ref{fig:interaction}.
  19.329 +\begin{figure}
  19.330 +\centering
  19.331 +\begin{tikzpicture}[
  19.332 +  font=\footnotesize,
  19.333 +  >=latex,
  19.334 +  arr/.style = {<-,shorten <=2.5em, shorten >=2em,color=color1,line width=1.5pt},
  19.335 +  block/.style = {align=center, anchor=north, inner sep=2},
  19.336 +  formula/.style = {font=\large}
  19.337 +]
  19.338 +
  19.339 +  \node [rectangle,rounded corners,very thick,draw=color1,block,formula,inner sep=5] (a) {$\frac{6a+3b}{3b} =\;?$};
  19.340 +
  19.341 +  \node [anchor=north,minimum width=14em,minimum height=18em,fill=color5] (student) at (-3.3,-1.5) {};
  19.342 +  \node [anchor=north,minimum width=14em,minimum height=18em,fill=color5] (computer) at (3.3,-1.5) {};
  19.343 +
  19.344 +  \node [anchor=north west,color=color2,font=\bf] (slabel) at (-5.96,-1.55) {student};
  19.345 +
  19.346 +  \node [anchor=north west,color=color2,font=\bf] (clabel) at (0.69,-1.57) {computer};
  19.347 +
  19.348 +
  19.349 +  \node [block,formula] (b) at (-3.3,-2.4) {$\frac{6a+3b}{3b} = 6a$}
  19.350 +    edge[arr] (a);
  19.351 +
  19.352 +  \node [block,text width=13em] (c) at (3.3,-2)
  19.353 +{\begin{center}
  19.354 +``You can't simplify like that.\\
  19.355 +Try to make products out of both,\\
  19.356 +nominator and denominator.\\
  19.357 +\medskip
  19.358 +Here is a link to look up.''
  19.359 +\end{center}}
  19.360 +    edge[arr] (b);
  19.361 +
  19.362 +  \node [block,below=2.2em of b] (d) {\includegraphics[width=5.3em]{next_btn}}
  19.363 +    edge[arr] (c);
  19.364 +
  19.365 +  \node [block,formula,below=2em of c] (e) {$\frac{6a+3b}{3b} = \frac{..(..+..)}{..}$}
  19.366 +    edge[arr] (d);
  19.367 +
  19.368 +  \node [block,below=2.2em of d] (f) {\includegraphics[width=5.3em]{next_btn}}
  19.369 +    edge[arr] (e);
  19.370 +
  19.371 +  \node [block,formula,below=2em of e] (g) {$\frac{6a+3b}{3b} = \frac{3(2a+..)}{..}$}
  19.372 +    edge[arr] (f);
  19.373 +
  19.374 +  \node [block,formula,below=2.2em of f] (h) {$\frac{6a+3b}{3b} = \frac{2a+b}{b}$}
  19.375 +    edge[arr] (g);
  19.376 +
  19.377 +\end{tikzpicture}
  19.378 +\caption{Interaction with \isac{} \cite{daroczy2013error}.}
  19.379 +\label{fig:interaction}
  19.380 +\end{figure}
  19.381 +In fig. \ref{fig:isac-archi}, the server accesses the {\em knowledge} in HTML representation, both in turn administered by a learning management system like {\em moodle}.
   19.382 +Separated from the front-end concerned with dialogs etc. is the \textit{Standard ML} environment, the implementation language of \isabelle{} and \isac{}: the {\em math-engine} together with {\em programs} for \textit{Lucas-Interpretation} and {\em knowledge} in \textit{Standard ML} representation for fast access. The math-engine builds upon {\em Isabelle} by reusing the term structure of simply typed lambda calculus and the respective parsing, pretty printing and matching, as well as the management of {\em knowledge} within theories. The orange color in fig. \ref{fig:isac-archi} indicates parallelization in order to exploit the availability of an arbitrary number of cores ({\em core 1..n}). In 2009, \isabelle{} introduced specific mechanisms to cope with multi-core environments. The central \isabelle{} component for parallelization is the {\em Scala bridge} (section \ref{ssec:isa-scala}) supporting asynchronous user interaction \cite{wenzel2012asynchronous} and parallel proof checking \cite{wenzel2009parallel} via the editor {\em jEdit} (section \ref{ssec:isa-jedit}). \isabelle{}'s parallel mechanisms are reused in the case study. The horizontal orange bar above {\em Isabelle} indicates that the parallelization of \isabelle{} affected \isac's management of {\em knowledge} in \textit{Standard ML} representation. This part of the case study is described in section \ref{ssec:parall-thy}. The vertical orange bar to the left of the {\em \isac{} math-engine} indicates the concurrent user session management accomplished by the second part of the case study, which is described in section \ref{ssec:session-manag}.
  19.383 +
  19.384 +\subsubsection{Modeling the Universe of Mathematics}
   19.385 +This goes beyond \isabelle{}'s way of modeling the deductive dimension of knowledge. Following a suggestion by Bruno Buchberger in 2001, \isac{} now implements two additional dimensions, together spanning a three-dimensional space:
  19.386 +\begin{enumerate}
  19.387 +\item\label{enum:thy}{\bf Deductive knowledge} is defined and managed in TP. Beginning with the basic axioms (axioms of higher-order logic in \textit{Isabelle/HOL} used by \isac), definitions are given and theorems about these definitions are proved, theory by theory. \isabelle{} provides an elegant mechanism for defining new syntax, which results in a language close to traditional mathematical notation. The subset of \isabelle{} knowledge presently in use by \isac{} can be found online \cite{isac2014knowledge}.
  19.388 +\item\label{enum:pbl}{\bf Application-oriented knowledge} is given by formal specifications of problems in applied mathematics. The specification of all problems, which can be solved automatically by \isac{} (with the help of algorithms, see below), is available online \cite{isac2014pbl}.
   19.389 +This collection is structured as a tree. Given such a tree, automated problem refinement is possible: e.g. starting from the specification at the node of {\em univariate equations} \cite{isac2014pblequuniv}, a given equation can be matched against all child nodes until the appropriate type of equation has been identified. Once a match has been found, the node contains a pointer to a program solving the specific type of equation.
   19.390 +This kind of problem refinement makes transparent what computer algebra systems do as black boxes.
  19.391 +\item\label{enum:met}{\bf Algorithmic knowledge} is given by programs to undergo \textit{Lucas-Interpretation}. The collection of programs is structured as a tree. However, principal requirements on the structure are still unclear. There are ideas of how to group algorithms in \cite{buchberger1984mathematik} but the ideas are too vague for mechanisation. \isac's programs can be found online \cite{isac2014met}.
  19.392 +\end{enumerate}
   19.393 +The structures of the knowledge collections of pts. \ref{enum:pbl} and \ref{enum:met} differ from that of pt. \ref{enum:thy}. E.g. linear equations in pt. \ref{enum:pbl} are the same for rational numbers and complex numbers, but the definition of rationals is far away from complex numbers in pt. \ref{enum:thy}.
   19.394 +So the implementation of the latter two collections is orthogonal to the first, the collection structured and managed by \isabelle{}. The only way of implementing them was by {\tt Unsynchronized.ref}, i.e. by global references. Write accesses to these references assumed a fixed evaluation order of the theories. This assumption broke with the introduction of parallel execution in \isabelle{}. Section \ref{ssec:parall-thy} explains how this issue was resolved.
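A minimal sketch (with hypothetical names, not \isac's actual code) illustrates why such global references depend on the evaluation order of theories:
\imlcode{~~\=(*hypothetical global store for problem specifications*)\\
\>val pbls = Unsynchronized.ref ([] : string list);\\
\>fun add\_pbl p = pbls := p :: !pbls;\\
\>add\_pbl "univariate equations"; (*executed in theory A*)\\
\>add\_pbl "linear equations"; (*executed in theory B*)\\
\>(*under parallel evaluation the order of A and B, and thus\\
\>~~the final content of !pbls, is no longer deterministic*)}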
  19.395 +
  19.396 +\subsection{Relation to Ongoing \isabelle{} Development}
  19.397 +\label{ssec:future-isab}
   19.398 +In my Bachelor's project \cite{lehnfeld2011verbindung} I already had the chance to contribute to \isac{}. Based on this experience, I can give the following overview of \isac{}'s development.
  19.399 +
   19.400 +Although the project has been active for more than two decades, \isabelle{} development still undergoes invasive reforms concerning localization, parallelization, etc. See the NEWS file \cite{isabelle2013news} for details. These reforms cause significant changes in module signatures and in functionality with more or less every release. Many of these changes enforce considerable updates of \isac{} because \isac{} extracts functionality from \isabelle{} at a rather deep level, apart from public interfaces.
   19.401 +Updating \isac{} in accordance with the regular \isabelle{} releases is crucial: Between 2002 and 2010, \isac{} development focused on the front-end. In 2010, \textit{Lucas-Interpretation} \cite{neuper2012automated} was clarified and \isabelle{}'s contexts appeared indispensable. Contexts had appeared in \isabelle{} after 2002, so an update was necessary. However, updating \isac's math-engine from \textit{Isabelle2002} to \textit{Isabelle2009-2} turned out to be very hard and was done during and after my Bachelor's project.
   19.402 +Ever since this unpleasant experience, \isac{} has been updated with each official \isabelle{} release; releases are published every eight to twelve months. In addition to these regular updates, there are major tasks concerning further adoption of \isabelle{}'s mechanisms which are open due to early design decisions of \isac.
  19.403 +
  19.404 +\paragraph{Adopt logical contexts from \isabelle{}.} 
   19.405 +\isac's mathematics-engine was developed between 2000 and 2002, long before \isabelle{} introduced logical contexts \cite{wenzel2013impl}. Without these contexts the user had to explicitly fix the types of variables during input, which was very inconvenient. My Bachelor's project came in time to introduce contexts to \isac{} and gave me first experience with the system.
  19.406 +Experience with contexts also contributed to the theoretical clarification of \textit{Lucas-Interpretation} \cite{neuper2012automated}.
  19.407 +
  19.408 +\paragraph{Adapt to \isabelle{}'s parallel theory evaluation.}
  19.409 +Since \isabelle{} traverses its theory dependency graph in parallel, \isac{} can no longer rely on a particular, deterministic execution order. Fixing the problems which arose from this fact and integrating the related datastructures into \isabelle{}'s theory data management model is one of the big achievements of the project work that this thesis is based on. The details of this process are outlined in section \ref{ssec:parall-thy}.
  19.410 +
  19.411 +\paragraph{Exploit parallelism for concurrent session management.}
  19.412 +This task addresses the utilization of \isabelle{}'s new mechanisms for parallelism, most notably futures, for \isac{}'s user session management in order to allow for a high number of concurrent, efficiently executed calculations. Section \ref{ssec:session-manag} documents this step in more depth.
  19.413 +
  19.414 +\paragraph{Make \isac's programming language usable.} \isac's programming language was implemented between 2000 and 2002, long before \isabelle{}'s function package was available \cite{krauss2013defining}. \textit{Lucas-Interpretation} on programs in this language uses a variety of rulesets for rewriting, too complicated for an average programmer \cite{rocnik2012interactive}.
  19.415 +Considerable simplification of programming can be expected by narrowing \isac's programming language towards \isabelle{}'s function package. In particular, components of \isabelle{}'s code generator \cite{haftmann2010code} promise to greatly improve computational efficiency.
  19.416 +
  19.417 +\paragraph{Adopt \isabelle{}'s numeral computation for \isac{}.}
  19.418 +The initial conception of \isac{} emphasized applied mathematics, which is unrealistic without floating point numbers. There were early attempts to implement the latter together with complex numbers. Both, floating point numbers and complex numbers have in the meantime been implemented in \isabelle{}. These shall now be adopted for \isac{}.
  19.419 +
  19.420 +\paragraph{Improve the efficiency of \isac{}'s rewrite engine.}
  19.421 +\isac's initial design stressed the importance of computations closely resembling work by paper and pencil. Major parts of computation happen by rewriting, so a large number of rewrite rules need to form groups which as such mimic the step width of human computations.
  19.422 +\isabelle{}'s rewrite engine meets opposite requirements: provide a maximum of automation with large rule sets (``simpsets''). Only recently, frequent user requests motivated projects in the \isabelle{} development to provide readable traces for rewriting.
  19.423 +However, the requirements of \isabelle{} and \isac{} are too different and \isac{} will keep its own rewrite engine. But the mechanism of {\em term nets} \cite{charniak2014artificial} could be adopted to improve \isac's efficiency.
  19.424 +
  19.425 +\paragraph{Adopt \textit{Isabelle/jEdit} for \isac{}.}
  19.426 +\isac's initial design also stressed usability of the front-end for students. At this time \isabelle{}'s front-end was {\em Proof General} \cite{aspinall2000proof}. Thus there was no choice but to develop a custom GUI for \isac{}.
   19.427 +In the meantime \textit{Isabelle/jEdit} has become an appealing prover IDE. However, the requirements of the two systems presently differ fundamentally: \isabelle{} connects one engineer with a multitude of cores while \isac{} connects a multitude of students with one server. \isac's centralised structure has good reasons: groups of students have one lecturer, groups of students are examined together, etc.
  19.428 +During the next years, \isac{}'s front-end will be developed and improved independently from \isabelle{}. Narrowing towards \isabelle{} can start as soon as \textit{Isabelle/jEdit} moves towards collaborative work, implements session management, version management, etc.
  19.429 +
  19.430 +\subsubsection{Summary on \isac's Approach to \isabelle{}}
   19.431 +The points listed above are those where \isac's math-engine is expected to approach \isabelle{} in future development, a process this thesis has already begun. Development of \isac's math-engine is separated from the development of \isac's front-end. Section \ref{ssec:isac-archi} describes \isac's {\em architecture {\em separating} mathematics and dialogs}.
  19.432 +Development of the front-end addresses the expertise represented by our {\em University of Applied Sciences} in Hagenberg: human-centered computing, interactive media, mobile computing, secure information systems, software engineering, etc. \isac's development efforts in these disciplines can be planned independently from the above list for several years, until \textit{Isabelle/jEdit} is ready to be adopted for \isac{} in a particularly interesting development effort.
  19.433 +
  19.434 +
  19.435 +\section{Practical Work on \isac{}}
  19.436 +\label{sec:contrib}
  19.437 +This section documents the implementation work that has been carried out as project work in preparation for and along with the writing of this thesis. During the first big step, the breakdown of \isac{}'s theory evaluation caused by an invasive update to \isabelle{}'s mechanisms was fixed (section \ref{ssec:parall-thy}). This made possible the introduction of concurrency to \isac{}'s user session management and thus enabled the parallel computation of independent calculations. This process is described in section \ref{ssec:session-manag}. Section \ref{sec:performance} discusses the effects of this process on \isac{}'s performance. Finally, \isac{}'s project status is summarized in section \ref{sec:proj-stat}.
  19.438 +
  19.439 +
  19.440 +\subsection{Integration With \isabelle{}'s Parallel Theory Evaluation}
  19.441 +\label{ssec:parall-thy}
  19.442 +In older \isabelle{} versions, it was not possible for programmers to append arbitrary data to the datastructure which holds all the theories. {\em Isabelle2005} \cite{isabelle2013news} then introduced a concept called {\em theory data}, which allowed developers to store user-defined datastructures in a theory's context from \textit{ML} code embedded in that theory.
  19.443 +
   19.444 +\paragraph{Data flow.} In order to understand the following paragraphs, we need to know how \isabelle{} computes theories and what the theory data flow looks like. Section \ref{sec:isaisar} already mentioned that the inheritance structure of \isabelle{}'s theories forms a directed, acyclic graph. Computation begins at the root theories, which are determined backwards from the current working theory. The resulting data is then available in all descendant theories. Whenever a theory has more than one ancestor, the derived data is merged.
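As a sketch, assume a list-valued data slot and a duplicate-free list union (one plausible reading of a {\tt merge\_lists} function, not necessarily \isac's actual definition):
\imlcode{~~\=fun merge\_lists (xs, ys) =\\
\>~~xs @ filter\_out (member (op =) xs) ys;\\
\>(*diamond import: theories B and C both extend A and may be\\
\>~~evaluated in parallel; their data is merged in D, e.g.\\
\>~~A: [1], B: [1,2], C: [1,3] yields D: [1,2,3]*)}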
  19.445 +
  19.446 +\bigskip
  19.447 +
   19.448 +The last sections have outlined the conceptual and architectural differences between \isabelle{} and \isac{}. \isac{} needs to manage application-oriented and algorithmic knowledge which originally could not be integrated with the deductive structure of \isabelle{}'s theories. Therefore these data were stored in raw \textit{ML} references. {\em Isabelle2009} then introduced parallelism. Now the computation order of the theories was no longer deterministic and \isac{} could not ensure that its \textit{ML} references were accessed and updated in the same order as previously. This caused certain parts of the system to show faulty behavior in some cases. {\em Isabelle2009-1} then added the wrapper structure {\tt Unsynchronized.ref} to denote that these references are not synchronized with the parallel environment managed by \isabelle{}.
  19.449 +While the temporary deactivation of faulty modules and certain work\-arounds fixed most problems, the parallelization of \isac{}'s user session management required that most relevant data be properly managed by \isabelle{}. Therefore the preparatory step for the parallelization was the integration of the unsynchronized references in question with \isabelle{} by means of the functor {\tt Theory\_Data} \cite{wenzel2013impl}.
   19.450 +It is important to mention that the \isabelle{} implementation manual \cite{wenzel2013impl} warns against declaring too many datastructures on the basis of {\tt Theory\_Data} because they are newly initialized for every single theory that derives from the module that declared them. Thus, space is reserved, which can cause a significantly increased memory footprint. Most of this overhead, however, occurs when theories are loaded dynamically. When working with a compiled \isac{} system the overhead should be reasonable.
  19.451 +
  19.452 +The workflow was implemented for several central elements and broken down into five isolated steps to conform with \isabelle{}'s minimal changeset development and documentation model (section \ref{sec:isadevdoc}):
  19.453 +\begin{enumerate}
   19.454 +  \item\label{enum:isolate} {\bf Isolate access.} In this step, read accesses were centralized by wrapping references in an access function and replacing all occurrences with a call to the function:
   19.455 +\imlcode{~~\=val foo = Unsynchronized.ref [1,2,3]; (*declaration*)\\
  19.456 +\>fun get\_foo () = !foo; (*new*)\\
  19.457 +\>fun add x y = x + y;\\
  19.458 +\>fun foo\_sum () = fold add (!foo) 0; (*obsolete*)\\
  19.459 +\>fun foo\_sum () = fold add (get\_foo ()) 0; (*new*)}
  19.460 +
  19.461 +  \item\label{enum:addthydata} {\bf Add {\tt Theory\_Data} access functions} in parallel to the existing ones. This includes a new version of the getter function ({\tt get\_foo} in pt. \ref{enum:isolate}). But first we need to define a datastructure that \isabelle{} can manage. Thus, we need to implement the functor \texttt{Theory\_Data} for our specific datatype\footnote{see appendix \ref{app:merge-lists} for definitions of {\tt merge\_lists(')}}:
  19.462 +\imlcode{\label{alg:thydata-functor}
  19.463 +~~\=st\=ructure Foo\_Data = Theory\_Data (\\
  19.464 +\>\>type T = int list;\\
  19.465 +\>\>val empty = [];\\
  19.466 +\>\>val extend = I; (*identity*)\\
  19.467 +\>\>val merge = merge\_lists;\\
  19.468 +\>);}
  19.469 +\noindent
   19.470 +{\tt T} is the type of the data. {\tt empty} is the initial value for every theory that does not define this particular data slot content. {\tt extend} is used for reinitialization on import of a theory and can be understood as a unitary {\tt merge}. Finally, {\tt merge} declares how the data slot contents of two theories are to be joined. Unlike the operations that we are replacing during this process, the two simple access functions
  19.471 +\imlcode{~~\=val get\_foo' = Foo\_Data.get;\\
  19.472 +\>fun put\_foo' xs = Foo\_Data.map (merge\_lists' xs);}
  19.473 +\noindent
   19.474 +additionally require the target theory as an input parameter. Because \textit{ML} code is always embedded in \isabelle{} theories, these functions can operate on any available theory according to the dependency graph. Each theory stores its own version of the data slot. A new version of {\tt get\_foo},
  19.475 +\imlcode{~~fun get\_foo () = get\_foo' @\{theory\}; \textrm{,}}
  19.476 +which is not yet in use during this phase, could replace \texttt{get\_foo} from pt. \ref{enum:isolate}. This definition uses antiquotations (section \ref{sec:antiquot}, page \pageref{sec:antiquot}). This means that {\tt @\{theory\}} always refers to the theory where the function definition is located and is thus resolved at compile time. The alternative approach
  19.477 +\imlcode{~~fun get\_foo () = get\_ref\_thy () |> get\_foo';}
  19.478 +\noindent
   19.479 +is more flexible: {\tt get\_ref\_thy} retrieves a reference theory set by the programmer (details on {\tt get\_ref\_thy} are explained in the subsequent section) and passes it to {\tt get\_foo'} from above. The last aspect of this step is the addition of {\bf setup} blocks \cite{wenzel2013isar} where previously the raw references had been updated. Such a block must contain one \textit{ML} expression denoting a mapping from {\tt theory} to {\tt theory}. As a consequence, \isabelle{} updates the current theory context with the function's result in these locations, e.g.
  19.480 +\imlcode{~~setup \{* put\_foo' [4,5,6] *\} \textit{.}}
  19.481 +
   19.482 +  \item\label{enum:checkdiff} {\bf Check differences.} Now that both the old unsynchronized reference and the new theory data are created and updated, we can compare their contents and make sure that the previous steps were successful. Due to the nature of the datastructures and the fact that the new method caused a different ordering of some of the elements, the most convenient solution was to compute a string representation of both results, write them into documents and compare the documents using a file comparison utility. If any errors occurred, steps \ref{enum:isolate} and \ref{enum:addthydata} required revision.
  19.483 +
   19.484 +  \item\label{enum:shiftthydata} {\bf Shift to theory data.} Ideally, this step merely meant exchanging the old definition of {\tt get\_foo} (pt. \ref{enum:isolate}) for the new one (pt. \ref{enum:addthydata}).
  19.485 +
  19.486 +  \item\label{enum:removecode} {\bf Cleanup.} Finally, we can remove all code concerned with the unsynchronized reference.
  19.487 +
  19.488 +\end{enumerate}
  19.489 +
   19.490 +Besides acquiring the necessary knowledge on how to store and access arbitrary datastructures in a theory's context, the biggest challenges included understanding and merging the different kinds of custom datastructures and keeping the solutions simple, ideally simpler than the previous code. Also, I had to adjust the theory hierarchy and add new theories in order to keep a clean structure and to ensure that the availability and computation of datastructures remained sound and behaved as before. Some of the data dependencies had not been reflected in the dependency graph but had rather been fulfilled by a fortunate execution order.
  19.491 +
   19.492 +As a result of this contribution, most of the stateful components in \isac{}, which had been necessary to circumvent mismatches between \isac{}'s and \isabelle{}'s architectures, were eradicated from the system. Those which are still not synchronized are currently being replaced. However, they are not accessed by theories potentially executed in parallel and therefore do not interfere with \isac{}'s parallel user session management.
  19.493 +
  19.540 +
  19.541 +\subsection{Introduction of Concurrent User Session Management}
  19.542 +\label{ssec:session-manag}
   19.543 +\isabelle{}'s programming conventions prohibit user programs based on \isabelle{} from forking their own \textit{ML} threads. This is necessary because \isabelle{} has its own managed environment for concurrency which takes care of thread synchronization and shared resources. Accesses from user threads could easily break this well-tested framework. After careful analysis of the \isabelle{} documentation and of the code in some concurrency-related \textit{Isabelle/ML} modules, our suspicion that parallelizing \isac{}'s user session management would require breaking this convention proved unfounded. As it turned out, \isabelle{}'s implementation of {\em future}s (section \ref{sec:isafut}) provided everything we needed. As a result, the actual implementation effort was marginal. Since \isac{} does not maintain any problematic stateful resources anymore, there was no further effort required for synchronizing shared resources beyond those managed by \isabelle{}, except for the {\tt states} element, which maintains a state for each active calculation. It was changed from an {\tt Unsynchronized.ref} to a {\tt Synchronized.var} (see section \ref{sec:guarded-access}, page \pageref{sec:guarded-access}).
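The guarded-access pattern behind this change can be sketched as follows (the names {\tt states}, {\tt upd\_state}, {\tt get\_state} and the payload type are illustrative, not \isac's actual code):
\imlcode{~~\=(*one entry per active calculation, access guarded by Isabelle*)\\
\>val states = Synchronized.var "states" ([] : (int * string) list);\\
\>fun upd\_state id st =\\
\>~~Synchronized.change states (AList.update (op =) (id, st));\\
\>fun get\_state id =\\
\>~~AList.lookup (op =) (Synchronized.value states) id;}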
  19.544 +
   19.545 +The function {\tt appendFormula}, which derives the input formula from a calculation's state, had to be transformed such that it can be passed to {\tt Future.fork}, which then takes care of the parallel execution.
  19.546 +\imlcode{~~fun ap\=pendFormula params =\\
  19.547 +\>Future.fork (fn () => appendFormula' params);}
  19.548 +\noindent
  19.549 +is a simplified version of the key line in this transition, with {\tt appendFormula'} being the original version of the function. The same transformation was applied to {\tt autoCalculate}, which can compute calculations step by step as well as automatically determine the final result. Many of the computations within {\tt autoCalculate} require access to the current proof context datastructure. During the previous part of the project (section \ref{ssec:parall-thy}), this was done using the function {\tt the\_generic\_context} from \isabelle{}'s {\em ML\_Context} module. When I called more than one instance of {\tt autoCalculate} simultaneously, it turned out that functions executed within a future cannot acquire access to the current proof context through this function. In practice, the context data of the theory {\em Isac} is sufficient for all calculations and could be completed and made available at compile time. However, the \isac{} test suite extends the context data, which cannot be modified at run time. In order to overcome this, there is now a {\tt Synchronized.var} at an appropriate location within the dependency graph, which determines the reference theory for all operations accessing the newly introduced {\tt Theory\_Data} elements. The test suite uses a new function {\tt set\_ref\_thy} to override the standard value which is the theory {\em Isac}.
  19.550 +
  19.551 +Now that all calls to {\tt the\_generic\_context} have been eliminated, invocations of {\tt autoCalculate} and {\tt appendFormula} return immediately with a future value and thus the respective operations are executed in the background.
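Since both functions now return a future, a caller that does need the result can retrieve it later via {\tt Future.join} (a sketch with a hypothetical {\tt params} value):
\imlcode{~~\=val promise = autoCalculate params; (*returns immediately*)\\
\>(*...the calculation proceeds on a worker thread...*)\\
\>val result = Future.join promise; (*blocks until finished*)}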
  19.552 +
  19.553 +
  19.554 +\subsection{Performance}
  19.555 +\label{sec:performance}
  19.556 +The main objective of my project was to improve the speed with which calculations are performed by the system when multiple user front-ends invoke the \isac{} mathematics-engine simultaneously.
  19.557 +
   19.558 +The change in memory usage is not our foremost concern; it can only increase linearly. The number of concurrent calculations depends on \isabelle{}'s thread pool size. Pending invocations of the mathematics-engine are problem descriptions whose memory footprint is negligible, considering a reasonable number of clients working on the same \isac{} server. One remaining concern is the state data kept on a per-user basis, which could accumulate and pose a challenge to the server's working memory under heavy load.
  19.559 +
   19.560 +Temporal gains were measured on a 64-bit {\em Arch Linux} system running on a {\em Lenovo ThinkPad X230} with 4GB RAM and an {\em Intel\textregistered{} Core\texttrademark{} i5-3320M CPU} with two physical 2.60GHz cores, each providing one additional logical core (see {\em simultaneous multithreading} in section \ref{sec:multicoreproc}). The performance test file can be found on the attached DVD (see appendix \ref{app:dvd-proj}). It uses the most CPU-intensive problem instance currently available, whose calculation takes about 1.58 seconds on average. Various numbers of parallel invocations were issued and the time until all of them had finished was measured. Each test was repeated 20 times and the average taken. All the samples are shown in fig. \ref{fig:autocalc}.
  19.561 +
  19.562 +\def\thinkpaddata{data/thinkpad.dat}
  19.563 +\def\thinkpadavg{data/thinkpadavg.dat}
  19.564 +\def\thinkpaddataseq{data/thinkpad_seq.dat}
  19.565 +\def\thinkpadavgseq{data/thinkpadavg_seq.dat}
  19.566 +
  19.567 +\pgfplotsset{
  19.568 +    colormap={spectral}{[5pt]
  19.569 +        color(0pt)=(color1);
  19.570 +        color(5pt)=(color2);
  19.571 +    }
  19.572 +}
  19.573 +
  19.574 +\begin{figure}
  19.575 +\centering
  19.576 +\begin{tikzpicture}
  19.577 +  \begin{axis}[
  19.578 +    xlabel=number of calculations $n$,
  19.579 +    ylabel=time $t$ in ms,
  19.580 +    ylabel style={at={(-0.23,0.5)}},
  19.581 +    xmin=0,
  19.582 +    xmax=15.5,
  19.583 +    xticklabel style={/pgf/number format/.cd,1000 sep={,}},
  19.584 +    xticklabel=\pgfmathparse{\tick}\pgfmathprintnumber{\pgfmathresult},
  19.585 +    ymin=0,
  19.586 +    ymax=24000,
  19.587 +    scaled y ticks = false,
  19.588 +    yticklabel style={/pgf/number format/.cd,fixed,1000 sep={,}},
  19.589 +    yticklabel=\pgfmathparse{\tick}\pgfmathprintnumber{\pgfmathresult},
  19.590 +    grid=major,
  19.591 +    legend entries={seq. samples, seq. mean, par. samples, par. mean},
  19.592 +    legend style={draw=none,at={(0.49,0.92)}},
  19.593 +    axis lines=left
  19.594 +  ]
  19.595 +    \addplot[scatter, mark=o, only marks, color=color1, point meta=explicit symbolic] table[y=t, x=n,meta index=0] {\thinkpaddataseq};
  19.596 +    \addplot[color=color1] table[y=t, x=n, meta index=0]{\thinkpadavgseq};
  19.597 +    \addplot[scatter, mark=o, only marks, color=color2, point meta=explicit symbolic] table[y=t, x=n,meta index=1] {\thinkpaddata};
  19.598 +    \addplot[color=color2] table[y=t, x=n, meta index=1]{\thinkpadavg};
  19.599 +  \end{axis}
  19.600 +\end{tikzpicture}
  19.601 +
  19.602 +\caption{{\tt autoCalculate} performance comparison.}
  19.603 +\label{fig:autocalc}
  19.604 +\end{figure}
  19.605 +
  19.606 +\subsubsection{Discussion}
   19.607 +From these results it is evident that two parallel calculations can almost fully exploit both physical processing cores. The slight increase in processing time can be explained by a parallelization overhead caused by \isabelle{}'s future management. Another factor are the synchronized accesses to the {\tt states} datastructure. Since simultaneous multithreading cannot completely simulate additional cores, the increase in runtime between two and three parallel calculations is slightly bigger. The same happens between six and seven parallel {\tt autoCalculate} invocations. As the machine has four logical cores, multiples of four show a significant increase in runtime because once all cores are occupied, additional calculations need to wait. Hence, the execution time roughly doubles each time the number of calculations exceeds another multiple of the number of cores. The measured runtime of the new {\tt autoCalculate} version was only $38.7\%$ of the previous, sequential implementation with eight parallel calculations, $39.8\%$ with ten and $39.7\%$ with twelve. This is a very satisfying outcome and we trust that it will help to significantly improve the end user experience and enable \isac{} servers to deal with a high number of clients simultaneously.
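As a plausibility check, these ratios correspond to a speedup of roughly
\[ S = \frac{t_{\mathrm{seq}}}{t_{\mathrm{par}}} \approx \frac{1}{0.387} \approx 2.58 \textrm{,} \]
where $t_{\mathrm{seq}}$ and $t_{\mathrm{par}}$ denote the total sequential and parallel runtimes for eight parallel calculations. A value between 2 (the number of physical cores) and 4 (the number of logical cores) is consistent with what simultaneous multithreading would suggest.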
  19.608 +
  19.609 +\subsection{Project Status}
  19.610 +\label{sec:proj-stat}
   19.611 +The user session management is now able to utilize several processing cores for independent computations. However, communication between the \textit{ML} and \textit{Java} layers occurs on the basis of plain standard I/O streams, i.e. multiple front-ends using the same math-engine instance need to write to the same input stream on the \textit{ML} side and thus communication is unsynchronized. This means that although the math-engine is now able to deal with calculations for multiple users concurrently in an efficient manner, the communication model does not yet allow for thread-safe communication. Changing this may be the subject of a future project. Another area requiring further investigation is the memory footprint of accumulated state data for calculations and how to deal with a very high number of concurrent user sessions in this respect.
  19.612 +
  19.613 +% problems with streams, communication between ml and java, no return value of appendFormula
    20.1 --- a/doc-isac/mlehnfeld/master/thesis/kurzfassung.tex	Thu Jun 26 10:50:27 2014 +0200
    20.2 +++ b/doc-isac/mlehnfeld/master/thesis/kurzfassung.tex	Thu Jun 26 17:19:30 2014 +0200
    20.3 @@ -1,5 +1,9 @@
    20.4  \chapter{Kurzfassung}
    20.5  
    20.6  \begin{german}
    20.7 -Deutscher Abstract...
    20.8 +Multicore-Prozessoren sind im Personal-Computing-Bereich mittlerweile ein etablierter Standard. Ähnlich der {\em Softwarekrise} um 1970 ist aber die Entwicklung von Software, die diese Hardware effizient nutzen kann, immer noch ein schwieriger, fehleranfälliger Prozess.
    20.9 +
   20.10 +Funktionale Programmiersprachen basieren auf einem Programmierstil ohne Nebeneffekte und sollen daher für diese Art von Aufgaben besonders geeignet sein. Eine Anwendung solcher Sprachen sind computerbasierte Theorembeweiser. Wegen ihrer Begründung in Mathematik und Logik liegt funktionale Programmierung als Paradigma besonders nahe. {\em Isabelle} ist ein Framework für interaktives Beweisen von Theoremen, dessen Backend in {\em Standard ML} entwickelt wurde. Die Implementierungsarbeiten der letzten Jahre haben dazu geführt, dass das System nun die Ressourcen moderner Multicore-Prozessorarchitekturen besonders wirkungsvoll einsetzen kann. Das Mathematiklernsystem \isac{} baut auf {\em Isabelle} auf und war damit bis vor Kurzem das einzige seiner Art, welches auf Theorembeweisertechnologien basiert.
   20.11 +
   20.12 +Diese Arbeit dokumentiert die Einführung von {\em Isabelle}s Parallelitätskonzepten in \isac{} und gibt einen Überblick über dafür relevante Technologien wie Grundlagen der Parallelität, unterschiedliche Ansätze zu Parallelität in funktionaler Programmierung und maschinengestütztes Beweisen. Außerdem werden die Architekturen {\em Isabelle}s und \isac{}s vorgestellt, die beide Mechanismen zur Kommunikation mit der {\em Java Virtual Machine} für ihre Frontends beinhalten. Damit werden die Stärken funktionaler und imperativer Programmierung kombiniert und beide Paradigmen jeweils zur Lösung jener Probleme eingesetzt, für die sie besonders angemessen sind. Die Ergebnisse der Parallelisierung sind vielversprechend und werden es ermöglichen, dass eine große Anzahl von Schülern und Studenten gleichzeitig einen schnell reagierenden \isac{}-Server für Berechnungen beanspruchen kann.
   20.13  \end{german}
    21.1 --- a/doc-isac/mlehnfeld/master/thesis/literature.bib	Thu Jun 26 10:50:27 2014 +0200
    21.2 +++ b/doc-isac/mlehnfeld/master/thesis/literature.bib	Thu Jun 26 17:19:30 2014 +0200
    21.3 @@ -1,8 +1,14 @@
    21.4 -@misc{moore1965cramming,
    21.5 -  title={Cramming more components onto integrated circuits},
    21.6 -  author={Moore, Gordon E and others},
    21.7 +
    21.8 +@article{moore1965cramming,
    21.9 +  title={{Cramming More Components onto Integrated Circuits}},
   21.10 +  author={Moore, Gordon E},
   21.11    year={1965},
   21.12 -  publisher={McGraw-Hill New York, NY, USA}
   21.13 +  journal={Electronics},
   21.14 +  pages = {114--117},
   21.15 +  month = {apr},
   21.16 +  day = {19},
   21.17 +  number={8},
   21.18 +  volume={38}
   21.19  }
   21.20  
   21.21  @article{sutter2005free,
   21.22 @@ -12,28 +18,37 @@
   21.23    volume={30},
   21.24    number={3},
   21.25    pages={202--210},
   21.26 -  year={2005}
   21.27 +  year={2005},
   21.28 +  url = {http://www.gotw.ca/publications/concurrency-ddj.htm}
   21.29  }
   21.30  
   21.31  @article{sutter2005software,
   21.32 -  title={Software and the concurrency revolution},
   21.33 -  author={Sutter, Herb and Larus, James},
   21.34 -  journal={Queue},
   21.35 -  volume={3},
   21.36 -  number={7},
   21.37 -  pages={54--62},
   21.38 -  year={2005},
   21.39 -  publisher={ACM}
   21.40 +  author    = {Herb Sutter and
   21.41 +               James R. Larus},
   21.42 +  title     = {Software and the concurrency revolution},
   21.43 +  journal   = {ACM Queue},
   21.44 +  volume    = {3},
   21.45 +  number    = {7},
   21.46 +  year      = {2005},
   21.47 +  pages     = {54-62},
   21.48 +  ee        = {http://doi.acm.org/10.1145/1095408.1095421},
   21.49 +  bibsource = {DBLP, http://dblp.uni-trier.de}
   21.50  }
   21.51  
   21.52  @inproceedings{luebke2006gpgpu,
   21.53 -  title={GPGPU: general-purpose computation on graphics hardware},
   21.54 -  author={Luebke, David and Harris, Mark and Govindaraju, Naga and Lefohn, Aaron and Houston, Mike and Owens, John and Segal, Mark and Papakipos, Matthew and Buck, Ian},
   21.55 -  booktitle={Proceedings of the 2006 ACM/IEEE conference on Supercomputing},
   21.56 -  pages={208},
   21.57 -  year={2006},
   21.58 -  organization={ACM}
   21.59 -}
   21.60 + author = {Luebke, David and Harris, Mark and Kr\"{u}ger, Jens and Purcell, Tim and Govindaraju, Naga and Buck, Ian and Woolley, Cliff and Lefohn, Aaron},
   21.61 + title = {GPGPU: General Purpose Computation on Graphics Hardware},
   21.62 + booktitle = {ACM SIGGRAPH 2004 Course Notes},
   21.63 + series = {SIGGRAPH '04},
   21.64 + year = {2004},
   21.65 + venue = {Los Angeles, CA},
   21.66 + articleno = {33},
   21.67 + url = {http://doi.acm.org/10.1145/1103900.1103933},
   21.68 + doi = {10.1145/1103900.1103933},
   21.69 + acmid = {1103933},
   21.70 + publisher = {ACM},
   21.71 + address = {New York, NY, USA},
   21.72 +} 
   21.73  
   21.74  @book{burge1975recursive,
   21.75    title={Recursive programming techniques},
   21.76 @@ -43,16 +58,1706 @@
   21.77  }
   21.78  
   21.79  @book{paulson1994isabelle,
   21.80 -  title={Isabelle: A generic theorem prover},
   21.81 -  author={Paulson, Lawrence C},
   21.82 -  volume={828},
   21.83 -  year={1994},
   21.84 -  publisher={Springer}
   21.85 +  author    = {Lawrence C. Paulson},
   21.86 +  title     = {Isabelle - A Generic Theorem Prover},
   21.87 +  publisher = {Springer-Verlag},
   21.88 +  series    = {Lecture Notes in Computer Science},
   21.89 +  volume    = {828},
   21.90 +  year      = {1994},
   21.91 +  isbn      = {3-540-58244-4},
   21.92 +  ee        = {http://dx.doi.org/10.1007/BFb0030541},
   21.93 +  bibsource = {DBLP, http://dblp.uni-trier.de}
   21.94  }
   21.95  
   21.96  @phdthesis{neuper2001reactive,
   21.97 -  title={Reactive user-guidance by an autonomous engine doing high-school math},
   21.98 -  author={Neuper, Walther A},
   21.99 +  title={Reactive User-Guidance by an Autonomous Engine Doing High-School Math},
  21.100 +  author={Neuper, Walther},
  21.101    year={2001},
  21.102 -  school={PhD thesis, IICM-Inst. f. Softwaretechnology, Technical University, A-8010 Graz, 2001. ftp://ftp. ist. tugraz. at/pub/projects/isac/publ/wn-diss. ps. gz}
  21.103 +  school={Graz University of Technology}
  21.104  }
  21.105 +
  21.106 +@book{hammond2000research,
  21.107 + editor = {Hammond, Kevin and Michelson, Greg},
  21.108 + title = {Research Directions in Parallel Functional Programming},
  21.109 + year = {2000},
  21.110 + isbn = {1852330929},
  21.111 + publisher = {Springer-Verlag},
  21.112 + address = {London, UK, UK},
  21.113 +}
  21.114 +
  21.115 +@article{hudak1989conception,
  21.116 + author = {Hudak, Paul},
  21.117 + title = {Conception, Evolution, and Application of Functional Programming Languages},
  21.118 + journal = {ACM Comput. Surv.},
  21.119 + issue_date = {Sep. 1989},
  21.120 + volume = {21},
  21.121 + number = {3},
  21.122 + month = sep,
  21.123 + year = {1989},
  21.124 + issn = {0360-0300},
  21.125 + pages = {359--411},
  21.126 + numpages = {53},
  21.127 + url = {http://doi.acm.org/10.1145/72551.72554},
  21.128 + doi = {10.1145/72551.72554},
  21.129 + acmid = {72554},
  21.130 + publisher = {ACM},
  21.131 + address = {New York, NY, USA},
  21.132 +}
  21.133 +
  21.134 +@article{church1932set,
  21.135 +  title={A set of postulates for the foundation of logic},
  21.136 +  author={Church, Alonzo},
   21.137 +  journal={Annals of Mathematics},
  21.138 +  volume={33},
  21.139 +  number={2},
  21.140 +  pages={346--366},
  21.141 +  year={1932}
  21.142 +}
  21.143 +
  21.144 +@article{church1940formulation,
  21.145 +  author    = {Alonzo Church},
  21.146 +  title     = {A Formulation of the Simple Theory of Types},
  21.147 +  journal   = {Journal of Symbolic Logic},
  21.148 +  volume    = {5},
  21.149 +  number    = {2},
  21.150 +  year      = {1940},
  21.151 +  pages     = {56-68},
  21.152 +  ee        = {http://projecteuclid.org/euclid.jsl/1183387805},
  21.153 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.154 +}
  21.155 +
  21.156 +@article{mccarthy1978history,
  21.157 + author = {McCarthy, John},
  21.158 + title = {History of LISP},
  21.159 + journal = {SIGPLAN Not.},
  21.160 + issue_date = {August 1978},
  21.161 + volume = {13},
  21.162 + number = {8},
  21.163 + month = aug,
  21.164 + year = {1978},
  21.165 + issn = {0362-1340},
  21.166 + pages = {217--223},
  21.167 + numpages = {7},
  21.168 + url = {http://doi.acm.org/10.1145/960118.808387},
  21.169 + doi = {10.1145/960118.808387},
  21.170 + acmid = {808387},
  21.171 + publisher = {ACM},
  21.172 + address = {New York, NY, USA},
  21.173 +} 
  21.174 +
  21.175 +@article{backus1978can,
  21.176 +  author    = {John W. Backus},
  21.177 +  title     = {Can Programming Be Liberated From the von Neumann Style?
  21.178 +               A Functional Style and its Algebra of Programs},
  21.179 +  journal   = {Communications of the ACM},
  21.180 +  volume    = {21},
  21.181 +  number    = {8},
  21.182 +  year      = {1978},
  21.183 +  pages     = {613-641},
  21.184 +  ee        = {http://doi.acm.org/10.1145/359576.359579},
  21.185 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.186 +}
  21.187 +
  21.188 +@article{milner1978theory,
  21.189 +  author    = {Robin Milner},
  21.190 +  title     = {A Theory of Type Polymorphism in Programming},
  21.191 +  journal   = {Journal of Computer and System Sciences},
  21.192 +  volume    = {17},
  21.193 +  number    = {3},
  21.194 +  year      = {1978},
  21.195 +  pages     = {348-375},
  21.196 +  ee        = {http://dx.doi.org/10.1016/0022-0000(78)90014-4},
  21.197 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.198 +}
  21.199 +
  21.200 +@book{milner1997definition,
  21.201 + author = {Milner, Robin and Tofte, Mads and Macqueen, David},
   21.202 + title = {The Definition of Standard ML},
  21.203 + year = {1997},
  21.204 + isbn = {0262631814},
  21.205 + publisher = {MIT Press},
  21.206 + address = {Cambridge, MA, USA},
  21.207 +} 
  21.208 +
  21.209 +@article{jones2003haskell,
  21.210 +  author = {Simon {Peyton Jones} and others},
  21.211 +  title = {The {Haskell} 98 Language and Libraries: The Revised Report},
  21.212 +  journal = {Journal of Functional Programming},
  21.213 +  volume = 13,
  21.214 +  number = 1,
  21.215 +  pages = {0--255},
  21.216 +  month = {Jan},
  21.217 +  year = 2003,
  21.218 +  url = {http://www.haskell.org/definition/},
  21.219 +}
  21.220 +
  21.221 +@online{haskell2013functional,
  21.222 +  title={Functional programming - HaskellWiki},
  21.223 +  url={http://www.haskell.org/haskellwiki/Functional_programming},
  21.224 +  year={2013}
  21.225 +}
  21.226 +
  21.227 +@inproceedings{peyton1993imperative,
  21.228 +  author    = {Simon L. Peyton Jones and
  21.229 +               Philip Wadler},
  21.230 +  title     = {Imperative Functional Programming},
  21.231 +  booktitle = {POPL},
  21.232 +  year      = {1993},
  21.233 +  pages     = {71-84},
  21.234 +  ee        = {http://doi.acm.org/10.1145/158511.158524},
  21.235 +  crossref  = {DBLP:conf/popl/1993},
  21.236 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.237 +}
  21.238 +
  21.239 +@article{achten1995ins,
  21.240 +  author    = {Peter Achten and
  21.241 +               Marinus J. Plasmeijer},
  21.242 +  title     = {The Ins and Outs of Clean I/O},
  21.243 +  journal   = {Journal of Functional Programming},
  21.244 +  volume    = {5},
  21.245 +  number    = {1},
  21.246 +  year      = {1995},
  21.247 +  pages     = {81-110},
  21.248 +  ee        = {http://dx.doi.org/10.1017/S0956796800001258},
  21.249 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.250 +}
  21.251 +
  21.252 +@inproceedings{wadler1990linear,
  21.253 +  AUTHOR = {P. Wadler},
  21.254 +  TITLE = {Linear types can change the world!},
  21.255 +  BOOKTITLE = {{IFIP {TC} 2} Working Conference on Programming Concepts and Methods},
   21.256 +  VENUE = {{Sea of Galilee, Israel}},
  21.257 +  PUBLISHER = {North Holland},
  21.258 +  EDITOR = {M. Broy and C. Jones},
  21.259 +  PAGES = {347--359},
  21.260 +  YEAR = {1990}
  21.261 +}
  21.262 +
  21.263 +@inproceedings{launchbury1993natural,
  21.264 + author = {Launchbury, John},
  21.265 + title = {A Natural Semantics for Lazy Evaluation},
  21.266 + booktitle = {Proceedings of the 20th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages},
  21.267 + series = {POPL '93},
  21.268 + year = {1993},
  21.269 + isbn = {0-89791-560-7},
  21.270 + venue = {Charleston, South Carolina, USA},
  21.271 + pages = {144--154},
  21.272 + numpages = {11},
  21.273 + url = {http://doi.acm.org/10.1145/158511.158618},
  21.274 + doi = {10.1145/158511.158618},
  21.275 + acmid = {158618},
  21.276 + publisher = {ACM},
  21.277 + address = {New York, NY, USA},
  21.278 +}
  21.279 +
  21.280 +@phdthesis{lippmeier2009type,
  21.281 +  title={Type Inference and Optimisation for an Impure World},
  21.282 +  author={Lippmeier, Ben},
  21.283 +  year={2009},
  21.284 +  school={Australian National University}
  21.285 +}
  21.286 +
  21.287 +@inproceedings{amdahl1967validity,
  21.288 + author = {Amdahl, Gene M.},
  21.289 + title = {Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities},
  21.290 + booktitle = {Proceedings of the April 18-20, 1967, Spring Joint Computer Conference},
  21.291 + series = {AFIPS '67 (Spring)},
  21.292 + year = {1967},
  21.293 + venue = {Atlantic City, New Jersey},
  21.294 + pages = {483--485},
  21.295 + numpages = {3},
  21.296 + url = {http://doi.acm.org/10.1145/1465482.1465560},
  21.297 + doi = {10.1145/1465482.1465560},
  21.298 + acmid = {1465560},
  21.299 + publisher = {ACM},
  21.300 + address = {New York, NY, USA},
  21.301 +}
  21.302 +
  21.303 +@article{gustafson1988reevaluating,
  21.304 +  author    = {John L. Gustafson},
  21.305 +  title     = {Reevaluating Amdahl's Law},
  21.306 +  journal   = {Communications of the ACM},
  21.307 +  volume    = {31},
  21.308 +  number    = {5},
  21.309 +  year      = {1988},
  21.310 +  pages     = {532-533},
  21.311 +  ee        = {http://doi.acm.org/10.1145/42411.42415},
  21.312 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.313 +}
  21.314 +
  21.315 +@article{moler1986matrix,
  21.316 +  title={Matrix computation on distributed memory multiprocessors},
  21.317 +  author={Moler, Cleve},
  21.318 +  journal={Hypercube Multiprocessors},
  21.319 +  volume={86},
  21.320 +  pages={181--195},
  21.321 +  year={1986}
  21.322 +}
  21.323 +
  21.324 +@article{geer2005chip,
  21.325 +  author    = {David Geer},
  21.326 +  title     = {Industry Trends: Chip Makers Turn to Multicore Processors},
  21.327 +  journal   = {IEEE Computer},
  21.328 +  volume    = {38},
  21.329 +  number    = {5},
  21.330 +  year      = {2005},
  21.331 +  pages     = {11-13},
  21.332 +  ee        = {http://doi.ieeecomputersociety.org/10.1109/MC.2005.160},
  21.333 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.334 +}
  21.335 +
  21.336 +@book{culler1999parallel,
  21.337 +  author    = {David E. Culler and
  21.338 +               Jaswinder Pal Singh and
  21.339 +               Anoop Gupta},
  21.340 +  title     = {Parallel computer architecture - a hardware / software approach},
  21.341 +  publisher = {Morgan Kaufmann},
  21.342 +  year      = {1999},
  21.343 +  isbn      = {978-1-55860-343-1},
  21.344 +  pages     = {I-XXIX, 1-1024},
  21.345 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.346 +}
  21.347 +
  21.348 +@book{hennessy2012computer,
  21.349 +  author    = {John L. Hennessy and
  21.350 +               David A. Patterson},
  21.351 +  title     = {Computer Architecture - A Quantitative Approach (5. ed.)},
  21.352 +  publisher = {Morgan Kaufmann},
  21.353 +  year      = {2012},
  21.354 +  isbn      = {978-0-12-383872-8},
  21.355 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.356 +}
  21.357 +
  21.358 +@inproceedings{tullsen1995simultaneous,
  21.359 + author = {Tullsen, Dean M. and Eggers, Susan J. and Levy, Henry M.},
  21.360 + title = {Simultaneous Multithreading: Maximizing On-chip Parallelism},
  21.361 + booktitle = {Proceedings of the 22nd Annual International Symposium on Computer Architecture},
  21.362 + series = {ISCA '95},
  21.363 + year = {1995},
  21.364 + isbn = {0-89791-698-0},
  21.365 + venue = {S. Margherita Ligure, Italy},
  21.366 + pages = {392--403},
  21.367 + numpages = {12},
  21.368 + url = {http://doi.acm.org/10.1145/223982.224449},
  21.369 + doi = {10.1145/223982.224449},
  21.370 + acmid = {224449},
  21.371 + publisher = {ACM},
  21.372 + address = {New York, NY, USA},
  21.373 +}
  21.374 +
  21.375 +@article{moore2008multicore,
  21.376 +  title={Multicore is bad news for supercomputers},
  21.377 +  author={Moore, Samuel K},
  21.378 +  journal={Spectrum, IEEE},
  21.379 +  volume={45},
  21.380 +  number={11},
  21.381 +  pages={15},
  21.382 +  year={2008},
  21.383 +  publisher={IEEE},
  21.384 +  month={nov}
  21.385 +}
  21.386 +
  21.387 +@inproceedings{li2007efficient,
  21.388 + author = {Li, Tong and Baumberger, Dan and Koufaty, David A. and Hahn, Scott},
  21.389 + title = {Efficient Operating System Scheduling for Performance-Asymmetric Multi-Core Architectures},
  21.390 + booktitle = {Proceedings of the 2007 ACM/IEEE Conference on Supercomputing},
  21.391 + series = {SC '07},
  21.392 + year = {2007},
  21.393 + isbn = {978-1-59593-764-3},
  21.394 + venue = {Reno, Nevada},
  21.395 + pages = {53:1--53:11},
  21.396 + articleno = {53},
  21.397 + numpages = {11},
  21.398 + url = {http://doi.acm.org/10.1145/1362622.1362694},
  21.399 + doi = {10.1145/1362622.1362694},
  21.400 + acmid = {1362694},
  21.401 + publisher = {ACM},
  21.402 + address = {New York, NY, USA},
  21.403 +}
  21.404 +
  21.405 +@article{henderson1986functional,
  21.406 +  author    = {Peter B. Henderson},
  21.407 +  title     = {Functional Programming, Formal Specification, and Rapid
  21.408 +               Prototyping},
  21.409 +  journal   = {IEEE Transactions on Software Engineering},
  21.410 +  volume    = {12},
  21.411 +  number    = {2},
  21.412 +  year      = {1986},
  21.413 +  pages     = {241-250},
  21.414 +  ee        = {http://doi.ieeecomputersociety.org/10.1109/TSE.1986.6312939},
  21.415 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.416 +}
  21.417 +
  21.418 +@inproceedings{harris2007feedback,
  21.419 + author = {Harris, Tim and Singh, Satnam},
  21.420 + title = {Feedback Directed Implicit Parallelism},
  21.421 + booktitle = {Proceedings of the 12th ACM SIGPLAN International Conference on Functional Programming},
  21.422 + series = {ICFP '07},
  21.423 + year = {2007},
  21.424 + isbn = {978-1-59593-815-2},
  21.425 + venue = {Freiburg, Germany},
  21.426 + pages = {251--264},
  21.427 + numpages = {14},
  21.428 + url = {http://doi.acm.org/10.1145/1291151.1291192},
  21.429 + doi = {10.1145/1291151.1291192},
  21.430 + acmid = {1291192},
  21.431 + publisher = {ACM},
  21.432 + address = {New York, NY, USA},
  21.433 + keywords = {functional programming, haskell, implicit parallelism},
  21.434 +}
  21.435 +
  21.436 +@article{cantrill2008real,
  21.437 +  author    = {Bryan Cantrill and
  21.438 +               Jeff Bonwick},
  21.439 +  title     = {Real-World Concurrency},
  21.440 +  journal   = {ACM Queue},
  21.441 +  volume    = {6},
  21.442 +  number    = {5},
  21.443 +  year      = {2008},
  21.444 +  pages     = {16-25},
  21.445 +  ee        = {http://doi.acm.org/10.1145/1454456.1454462},
  21.446 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.447 +}
  21.448 +
  21.449 +@inproceedings{thompson2005refactoring,
  21.450 + author = {Thompson, Simon},
  21.451 + title = {Refactoring Functional Programs},
  21.452 + booktitle = {Proceedings of the 5th International Conference on Advanced Functional Programming},
  21.453 + series = {AFP'04},
  21.454 + year = {2005},
  21.455 + isbn = {3-540-28540-7, 978-3-540-28540-3},
  21.456 + venue = {Tartu, Estonia},
  21.457 + pages = {331--357},
  21.458 + numpages = {27},
  21.459 + url = {http://dx.doi.org/10.1007/11546382_9},
  21.460 + doi = {10.1007/11546382_9},
  21.461 + acmid = {2162147},
  21.462 + publisher = {Springer-Verlag},
  21.463 + address = {Berlin, Heidelberg},
  21.464 +}
  21.465 +
  21.466 +@inproceedings{hammer2007proposal,
  21.467 + author = {Hammer, Matthew and Acar, Umut A. and Rajagopalan, Mohan and Ghuloum, Anwar},
  21.468 + title = {A Proposal for Parallel Self-adjusting Computation},
  21.469 + booktitle = {Proceedings of the 2007 Workshop on Declarative Aspects of Multicore Programming},
  21.470 + series = {DAMP '07},
  21.471 + year = {2007},
  21.472 + isbn = {978-1-59593-690-5},
  21.473 + venue = {Nice, France},
  21.474 + pages = {3--9},
  21.475 + numpages = {7},
  21.476 + url = {http://doi.acm.org/10.1145/1248648.1248651},
  21.477 + doi = {10.1145/1248648.1248651},
  21.478 + acmid = {1248651},
  21.479 + publisher = {ACM},
  21.480 + address = {New York, NY, USA},
  21.481 + keywords = {change propagation, dynamic dependency graphs, fork-join programs, parallel change propagation, parallelism, self-adjusting computation, series-parallel programs},
  21.482 +}
  21.483 +
  21.484 +@inproceedings{nocker1991concurrent,
   21.485 + author = {N\"{o}cker, E. G. J. M. H. and Smetsers, J. E. W. and van Eekelen, M. C. J. D. and Plasmeijer, M. J.},
  21.486 + title = {Concurrent Clean},
  21.487 + booktitle = {Proceedings on Parallel Architectures and Languages Europe: Volume II: Parallel Languages},
  21.488 + series = {PARLE '91},
  21.489 + year = {1991},
  21.490 + isbn = {0-387-54152-7},
  21.491 + venue = {Eindhoven, The Netherlands},
  21.492 + pages = {202--219},
  21.493 + numpages = {18},
  21.494 + url = {http://dl.acm.org/citation.cfm?id=111634.111646},
  21.495 + acmid = {111646},
  21.496 + publisher = {Springer-Verlag New York, Inc.},
  21.497 + address = {New York, NY, USA},
  21.498 +}
  21.499 +
  21.500 +@inproceedings{marlow2009runtime,
  21.501 + author = {Marlow, Simon and Peyton Jones, Simon and Singh, Satnam},
  21.502 + title = {Runtime Support for Multicore Haskell},
  21.503 + booktitle = {Proceedings of the 14th ACM SIGPLAN International Conference on Functional Programming},
  21.504 + series = {ICFP '09},
  21.505 + year = {2009},
  21.506 + isbn = {978-1-60558-332-7},
  21.507 + venue = {Edinburgh, Scotland},
  21.508 + pages = {65--78},
  21.509 + numpages = {14},
  21.510 + url = {http://doi.acm.org/10.1145/1596550.1596563},
  21.511 + doi = {10.1145/1596550.1596563},
  21.512 + acmid = {1596563},
  21.513 + publisher = {ACM},
  21.514 + address = {New York, NY, USA},
  21.515 + keywords = {Haskell parallel runtime},
  21.516 +}
  21.517 +
  21.518 +@article{nickolls2010gpu,
  21.519 +  author    = {John Nickolls and
  21.520 +               William J. Dally},
  21.521 +  title     = {The GPU Computing Era},
  21.522 +  journal   = {IEEE Micro},
  21.523 +  volume    = {30},
  21.524 +  number    = {2},
  21.525 +  year      = {2010},
  21.526 +  pages     = {56-69},
  21.527 +  ee        = {http://doi.ieeecomputersociety.org/10.1109/MM.2010.41},
  21.528 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.529 +}
  21.530 +
  21.531 +@article{tarditi2006accelerator,
  21.532 + author = {Tarditi, David and Puri, Sidd and Oglesby, Jose},
  21.533 + title = {Accelerator: Using Data Parallelism to Program GPUs for General-purpose Uses},
  21.534 + journal = {SIGOPS Oper. Syst. Rev.},
  21.535 + issue_date = {December 2006},
  21.536 + volume = {40},
  21.537 + number = {5},
  21.538 + month = {oct},
  21.539 + year = {2006},
  21.540 + issn = {0163-5980},
  21.541 + pages = {325--335},
  21.542 + numpages = {11},
  21.543 + doi = {10.1145/1168917.1168898},
  21.544 + publisher = {ACM},
  21.545 +}
  21.546 +
  21.547 +@article{blelloch90compiling,
  21.548 +  author    = {Guy E. Blelloch and
  21.549 +               Gary Sabot},
  21.550 +  title     = {Compiling Collection-Oriented Languages onto Massively Parallel
  21.551 +               Computers},
  21.552 +  journal   = {Journal of Parallel and Distributed Computing},
  21.553 +  volume    = {8},
  21.554 +  number    = {2},
  21.555 +  year      = {1990},
  21.556 +  pages     = {119--134},
  21.557 +  ee        = {http://dx.doi.org/10.1016/0743-7315(90)90087-6},
  21.558 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.559 +}
  21.560 +
  21.561 +@inproceedings{peytonjones2008harnessing,
  21.562 + author = {Peyton Jones, Simon},
  21.563 + title = {Harnessing the Multicores: Nested Data Parallelism in Haskell},
  21.564 + booktitle = {Proceedings of the 6th Asian Symposium on Programming Languages and Systems},
  21.565 + series = {APLAS '08},
  21.566 + year = {2008},
  21.567 + isbn = {978-3-540-89329-5},
  21.568 + venue = {Bangalore, India},
  21.569 + pages = {138--138},
  21.570 + numpages = {1},
  21.571 + url = {http://dx.doi.org/10.1007/978-3-540-89330-1_10},
  21.572 + doi = {10.1007/978-3-540-89330-1_10},
  21.573 + acmid = {1485356},
  21.574 + publisher = {Springer-Verlag},
  21.575 + address = {Berlin, Heidelberg},
  21.576 +}
  21.577 +
  21.578 +@inproceedings{baker1977incremental,
  21.579 + author = {Baker,Jr., Henry C. and Hewitt, Carl},
  21.580 + title = {The Incremental Garbage Collection of Processes},
  21.581 + booktitle = {Proceedings of the 1977 Symposium on Artificial Intelligence and Programming Languages},
  21.582 + year = {1977},
  21.583 + pages = {55--59},
  21.584 + numpages = {5},
  21.585 + url = {http://doi.acm.org/10.1145/800228.806932},
  21.586 + doi = {10.1145/800228.806932},
  21.587 + acmid = {806932},
  21.588 + publisher = {ACM},
  21.589 + address = {New York, NY, USA},
  21.590 + keywords = {Eager evaluation, Garbage collection, Lazy evaluation, Multiprocessing systems, Processor scheduling},
  21.591 +}
  21.592 +
  21.593 +@article{halstead1985multilisp,
  21.594 +  author    = {Robert H. Halstead Jr.},
  21.595 +  title     = {Multilisp: A Language for Concurrent Symbolic Computation},
  21.596 +  journal   = {ACM Transactions on Programming Languages and Systems},
  21.597 +  volume    = {7},
  21.598 +  number    = {4},
  21.599 +  year      = {1985},
  21.600 +  pages     = {501-538},
  21.601 +  ee        = {http://doi.acm.org/10.1145/4472.4478},
  21.602 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.603 +}
  21.604 +
  21.605 +@inproceedings{matthews2010efficient,
  21.606 + author = {Matthews, David C.J. and Wenzel, Makarius},
  21.607 + title = {Efficient Parallel Programming in Poly/ML and Isabelle/ML},
  21.608 + booktitle = {Proceedings of the 5th ACM SIGPLAN Workshop on Declarative Aspects of Multicore Programming},
  21.609 + series = {DAMP '10},
  21.610 + year = {2010},
  21.611 + isbn = {978-1-60558-859-9},
  21.612 + venue = {Madrid, Spain},
  21.613 + pages = {53--62},
  21.614 + numpages = {10},
  21.615 + url = {http://doi.acm.org/10.1145/1708046.1708058},
  21.616 + doi = {10.1145/1708046.1708058},
  21.617 + acmid = {1708058},
  21.618 + publisher = {ACM},
  21.619 + address = {New York, NY, USA},
  21.620 + keywords = {data parallelism, isabelle, parallel standard ml, poly/ml, theorem proving applications},
  21.621 +}
  21.622 +
  21.623 +@article{wegner1990concepts,
  21.624 +  author    = {Peter Wegner},
  21.625 +  title     = {Concepts and paradigms of object-oriented programming},
  21.626 +  journal   = {OOPS Messenger},
  21.627 +  volume    = {1},
  21.628 +  number    = {1},
  21.629 +  year      = {1990},
  21.630 +  pages     = {7-87},
  21.631 +  ee        = {http://doi.acm.org/10.1145/382192.383004},
  21.632 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.633 +}
  21.634 +
  21.635 +@article{shavit1997software,
  21.636 +  title={Software transactional memory},
  21.637 +  author={Shavit, Nir and Touitou, Dan},
  21.638 +  journal={Distributed Computing},
  21.639 +  volume={10},
  21.640 +  number={2},
  21.641 +  pages={99--116},
  21.642 +  year={1997},
  21.643 +  publisher={Springer-Verlag}
  21.644 +}
  21.645 +
  21.646 +@article{herlihy1990linearizability,
  21.647 + author = {Herlihy, Maurice P. and Wing, Jeannette M.},
  21.648 + title = {Linearizability: A Correctness Condition for Concurrent Objects},
  21.649 + journal = {ACM Trans. Program. Lang. Syst.},
  21.650 + issue_date = {July 1990},
  21.651 + volume = {12},
  21.652 + number = {3},
  21.653 + month = jul,
  21.654 + year = {1990},
  21.655 + issn = {0164-0925},
  21.656 + pages = {463--492},
  21.657 + numpages = {30},
  21.658 + url = {http://doi.acm.org/10.1145/78969.78972},
  21.659 + doi = {10.1145/78969.78972},
  21.660 + acmid = {78972},
  21.661 + publisher = {ACM},
  21.662 + address = {New York, NY, USA},
  21.663 +} 
  21.664 +
  21.665 +@inproceedings{ramadan2009committing,
  21.666 + author = {Ramadan, Hany E. and Roy, Indrajit and Herlihy, Maurice and Witchel, Emmett},
  21.667 + title = {Committing Conflicting Transactions in an STM},
  21.668 + booktitle = {Proceedings of the 14th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming},
  21.669 + series = {PPoPP '09},
  21.670 + year = {2009},
  21.671 + isbn = {978-1-60558-397-6},
  21.672 + venue = {Raleigh, NC, USA},
  21.673 + pages = {163--172},
  21.674 + numpages = {10},
  21.675 + url = {http://doi.acm.org/10.1145/1504176.1504201},
  21.676 + doi = {10.1145/1504176.1504201},
  21.677 + acmid = {1504201},
  21.678 + publisher = {ACM},
  21.679 + address = {New York, NY, USA},
  21.680 + keywords = {concurrency control, dependence-aware, serializability, transactional memory},
  21.681 +}
  21.682 +
  21.683 +@inproceedings{herlihy2003software,
  21.684 + author = {Herlihy, Maurice and Luchangco, Victor and Moir, Mark and Scherer,III, William N.},
  21.685 + title = {Software Transactional Memory for Dynamic-sized Data Structures},
  21.686 + booktitle = {Proceedings of the Twenty-second Annual Symposium on Principles of Distributed Computing},
  21.687 + series = {PODC '03},
  21.688 + year = {2003},
  21.689 + isbn = {1-58113-708-7},
  21.690 + venue = {Boston, Massachusetts},
  21.691 + pages = {92--101},
  21.692 + numpages = {10},
  21.693 + url = {http://doi.acm.org/10.1145/872035.872048},
  21.694 + doi = {10.1145/872035.872048},
  21.695 + acmid = {872048},
  21.696 + publisher = {ACM},
  21.697 + address = {New York, NY, USA},
  21.698 +}
  21.699 +
  21.700 +@patent{zhang2010software,
  21.701 +  title={Software Transaction Commit Order and Conflict Management},
  21.702 +  author={Zhang, L. and Grover, V.K. and Magruder, M.M. and Detlefs, D. and Duffy, J.J. and Graefe, G.},
  21.703 +  url={http://www.google.co.in/patents/US7711678},
  21.704 +  year={2010},
  21.705 +  month={5},
  21.706 +  number={US 7711678}
  21.707 +}
  21.708 +
  21.709 +@techreport{ennals2006software,
  21.710 +  title={Software transactional memory should not be obstruction-free},
  21.711 +  author={Ennals, Robert},
  21.712 +  year={2006},
   21.713 +  number={IRC-TR-06-052}, institution={Intel Research Cambridge Tech Report}
  21.714 +}
  21.715 +
  21.716 +@techreport{ennals2005efficient,
  21.717 +  author	= {Ennals, Robert},
  21.718 +  title		= {Efficient Software Transactional Memory},
  21.719 +  institution	= {Intel Research Cambridge Tech Report},
  21.720 +  number	= {IRC-TR-05-051},
  21.721 +  url		= {http://www.cs.wisc.edu/trans-memory/misc-papers/051_Rob_Ennals.pdf},
  21.722 +  year		= {2005},
  21.723 +  month		= {Jan}
  21.724 +}
  21.725 +
  21.726 +@inproceedings{saha2006mcrt,
  21.727 + author = {Saha, Bratin and Adl-Tabatabai, Ali-Reza and Hudson, Richard L. and Minh, Chi Cao and Hertzberg, Benjamin},
  21.728 + title = {McRT-STM: A High Performance Software Transactional Memory System for a Multi-Core Runtime},
  21.729 + booktitle = {Proceedings of the Eleventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming},
  21.730 + series = {PPoPP '06},
  21.731 + year = {2006},
  21.732 + isbn = {1-59593-189-9},
  21.733 + pages = {187--197},
  21.734 + numpages = {11},
  21.735 + doi = {10.1145/1122971.1123001},
  21.736 + publisher = {ACM}
  21.737 +}
  21.738 +
  21.739 +@inproceedings{harris2005composable,
  21.740 + author = {Harris, Tim and Marlow, Simon and Peyton-Jones, Simon and Herlihy, Maurice},
  21.741 + title = {Composable Memory Transactions},
  21.742 + booktitle = {Proceedings of the Tenth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming},
  21.743 + series = {PPoPP '05},
  21.744 + year = {2005},
  21.745 + isbn = {1-59593-080-9},
  21.746 + venue = {Chicago, IL, USA},
  21.747 + pages = {48--60},
  21.748 + numpages = {13},
  21.749 + url = {http://doi.acm.org/10.1145/1065944.1065952},
  21.750 + doi = {10.1145/1065944.1065952},
  21.751 + acmid = {1065952},
  21.752 + publisher = {ACM},
  21.753 + address = {New York, NY, USA},
  21.754 + keywords = {locks, non-blocking algorithms, transactions},
  21.755 +}
  21.756 +
  21.757 +@inproceedings{hewitt1973universal,
  21.758 + author = {Hewitt, Carl and Bishop, Peter and Steiger, Richard},
  21.759 + title = {A Universal Modular ACTOR Formalism for Artificial Intelligence},
  21.760 + booktitle = {Proceedings of the 3rd International Joint Conference on Artificial Intelligence},
  21.761 + series = {IJCAI'73},
  21.762 + year = {1973},
  21.763 + venue = {Stanford, USA},
  21.764 + pages = {235--245},
  21.765 + numpages = {11},
  21.766 + url = {http://dl.acm.org/citation.cfm?id=1624775.1624804},
  21.767 + acmid = {1624804},
  21.768 + publisher = {Morgan Kaufmann Publishers Inc.},
  21.769 + address = {San Francisco, CA, USA},
  21.770 +}
  21.771 +
  21.772 +@article{hewitt1977viewing,
  21.773 +  author    = {Carl Hewitt},
  21.774 +  title     = {Viewing Control Structures as Patterns of Passing Messages},
  21.775 +  journal   = {Artificial Intelligence},
  21.776 +  volume    = {8},
  21.777 +  number    = {3},
  21.778 +  year      = {1977},
  21.779 +  pages     = {323--364},
  21.780 +  ee        = {http://dx.doi.org/10.1016/0004-3702(77)90033-9},
  21.781 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.782 +}
  21.783 +
  21.784 +@book{agha1985actors,
  21.785 +  author    = {Gul A. Agha},
  21.786 +  title     = {ACTORS - a model of concurrent computation in distributed
  21.787 +               systems},
  21.788 +  publisher = {MIT Press},
  21.789 +  series    = {MIT Press series in artificial intelligence},
  21.790 +  year      = {1990},
  21.791 +  isbn      = {978-0-262-01092-4},
  21.792 +  pages     = {I--IX, 1--144},
  21.793 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.794 +}
  21.795 +
  21.796 +@article{agha1997foundation,
  21.797 +  author    = {Gul Agha and
  21.798 +               Ian A. Mason and
  21.799 +               Scott F. Smith and
  21.800 +               Carolyn L. Talcott},
  21.801 +  title     = {A Foundation for Actor Computation},
  21.802 +  journal   = {Journal of Functional Programming},
  21.803 +  volume    = {7},
  21.804 +  number    = {1},
  21.805 +  year      = {1997},
  21.806 +  pages     = {1--72},
  21.807 +  ee        = {http://journals.cambridge.org/action/displayAbstract?aid=44065},
  21.808 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.809 +}
  21.810 +
  21.811 +@misc{thurauakka,
  21.812 +  title={Akka framework},
  21.813 +  author={Thurau, Martin},
  21.814 +  year={2012},
  21.815 +  howpublished={Paper for Seminar Software Systems Engineering, Universität zu Lübeck},
  21.816 +  url={http://media.itm.uni-luebeck.de/teaching/ws2012/sem-sse/martin-thurau-akka.io.pdf}
  21.817 +}
  21.818 +
  21.819 +@inproceedings{karmani2009actor,
  21.820 + author = {Karmani, Rajesh K. and Shali, Amin and Agha, Gul},
  21.821 + title = {Actor Frameworks for the JVM Platform: A Comparative Analysis},
  21.822 + booktitle = {Proceedings of the 7th International Conference on Principles and Practice of Programming in Java},
  21.823 + series = {PPPJ '09},
  21.824 + year = {2009},
  21.825 + isbn = {978-1-60558-598-7},
  21.826 + venue = {Calgary, Alberta, Canada},
  21.827 + pages = {11--20},
  21.828 + numpages = {10},
  21.829 + url = {http://doi.acm.org/10.1145/1596655.1596658},
  21.830 + doi = {10.1145/1596655.1596658},
  21.831 + acmid = {1596658},
  21.832 + publisher = {ACM},
  21.833 + address = {New York, NY, USA},
  21.834 + keywords = {JVM, Java, abstractions, actors, comparison, frameworks, libraries, performance, semantics},
  21.835 +}
  21.836 +
  21.837 +@article{fuggetta1998understanding,
  21.838 +  author    = {Alfonso Fuggetta and
  21.839 +               Gian Pietro Picco and
  21.840 +               Giovanni Vigna},
  21.841 +  title     = {Understanding Code Mobility},
  21.842 +  journal   = {IEEE Transactions on Software Engineering},
  21.843 +  volume    = {24},
  21.844 +  number    = {5},
  21.845 +  year      = {1998},
  21.846 +  pages     = {342--361},
  21.847 +  ee        = {http://doi.ieeecomputersociety.org/10.1109/32.685258},
  21.848 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.849 +}
  21.850 +
  21.851 +@article{panwar1994methodology,
  21.852 +  author    = {Rajendra Panwar and
  21.853 +               Gul Agha},
  21.854 +  title     = {A Methodology for Programming Scalable Architectures},
  21.855 +  journal   = {Journal of Parallel and Distributed Computing},
  21.856 +  volume    = {22},
  21.857 +  number    = {3},
  21.858 +  year      = {1994},
  21.859 +  pages     = {479--487},
  21.860 +  ee        = {http://dx.doi.org/10.1006/jpdc.1994.1105},
  21.861 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.862 +}
  21.863 +
  21.864 +@techreport{odersky2004overview,
  21.865 +  title={An overview of the Scala programming language},
  21.866 +  author={Martin Odersky and Vincent Cremet and Iulian Dragos and Gilles Dubochet and Burak Emir and Sean McDirmid and Stéphane Micheloud and Nikolay Mihaylov and Michel Schinz and Erik Stenman and Lex Spoon and Matthias Zenger},
  21.867 +  year={2006},
  21.868 +  institution={École Polytechnique Fédérale de Lausanne},
  21.869 +  address={1015 Lausanne, Switzerland},
  21.870 +  number={LAMP-REPORT-2006-001},
  21.871 +  note={2nd edition}
  21.872 +}
  21.873 +
  21.874 +@incollection{haller2006event,
  21.875 +  title={Event-based programming without inversion of control},
  21.876 +  author={Haller, Philipp and Odersky, Martin},
  21.877 +  booktitle={Modular Programming Languages},
  21.878 +  pages={4--22},
  21.879 +  year={2006},
  21.880 +  publisher={Springer-Verlag}
  21.881 +}
  21.882 +
  21.883 +@article{haller2009scala,
  21.884 +  title={Scala actors: Unifying thread-based and event-based programming},
  21.885 +  author={Haller, Philipp and Odersky, Martin},
  21.886 +  journal={Theoretical Computer Science},
  21.887 +  volume={410},
  21.888 +  number={2},
  21.889 +  pages={202--220},
  21.890 +  year={2009},
  21.891 +  publisher={Elsevier}
  21.892 +}
  21.893 +
  21.894 +@online{akka2014futures,
  21.895 +  title={Futures (Scala) -- Akka Documentation},
  21.896 +  url={http://doc.akka.io/docs/akka/current/scala/futures.html},
  21.897 +  urldate={2014-06-05},
  21.898 +  year={2014}
  21.899 +}
  21.900 +
  21.901 +@online{scala2014migration,
  21.902 +  title={The Scala Actors Migration Guide - Scala Documentation},
  21.903 +  author={Jovanovic, Vojin and Haller, Philipp},
  21.904 +  url={http://docs.scala-lang.org/overviews/core/actors-migration-guide.html},
  21.905 +  year={2014}
  21.906 +}
  21.907 +
  21.908 +@inproceedings{tasharofi2013scala,
  21.909 + author = {Tasharofi, Samira and Dinges, Peter and Johnson, Ralph E.},
  21.910 + title = {Why Do Scala Developers Mix the Actor Model with Other Concurrency Models?},
  21.911 + booktitle = {Proceedings of the 27th European Conference on Object-Oriented Programming},
  21.912 + series = {ECOOP'13},
  21.913 + year = {2013},
  21.914 + isbn = {978-3-642-39037-1},
  21.915 + venue = {Montpellier, France},
  21.916 + pages = {302--326},
  21.917 + numpages = {25},
  21.918 + url = {http://dx.doi.org/10.1007/978-3-642-39038-8_13},
  21.919 + doi = {10.1007/978-3-642-39038-8_13},
  21.920 + acmid = {2525001},
  21.921 + publisher = {Springer-Verlag},
  21.922 + address = {Berlin, Heidelberg},
  21.923 +}
  21.924 +
  21.925 +@article{hoare1978communicating,
  21.926 +  author    = {C. A. R. Hoare},
  21.927 +  title     = {Communicating Sequential Processes},
  21.928 +  journal   = {Communications of the ACM},
  21.929 +  volume    = {21},
  21.930 +  number    = {8},
  21.931 +  year      = {1978},
  21.932 +  pages     = {666--677},
  21.933 +  ee        = {http://doi.acm.org/10.1145/359576.359585},
  21.934 +  bibsource = {DBLP, http://dblp.uni-trier.de}
  21.935 +}
  21.936 +
  21.937 +@book{INMOS84occam,
  21.938 +  author = {{INMOS, Ltd.}},
  21.939 +  year = {1984},
  21.940 +  address = {Englewood Cliffs},
  21.941 +  publisher = {Prentice Hall},
  21.942 +  title = {The occam programming manual}
  21.943 +}
  21.944 +
  21.945 +@inproceedings{dorward1997programming,
  21.946 + author = {Dorward, Sean and Pike, Rob and Winterbottom, Phil},
  21.947 + title = {Programming in Limbo},
  21.948 + booktitle = {Proceedings of the 42nd IEEE International Computer Conference},
  21.949 + series = {COMPCON '97},
  21.950 + year = {1997},
  21.951 + isbn = {0-8186-7804-6},
  21.952 + pages = {245--},
  21.953 + url = {http://dl.acm.org/citation.cfm?id=792770.793719},
  21.954 + acmid = {793719},
  21.955 + publisher = {IEEE Computer Society},
  21.956 + address = {Washington, DC, USA},
  21.957 + keywords = {programming languages, modules, graphics},
  21.958 +}
  21.959 +
  21.960 +@inproceedings{pike2012go,
  21.961 +  author    = {Rob Pike},
  21.962 +  title     = {Go at Google},
  21.963 +  booktitle = {Proceedings of the Conference on Systems, Programming, and Applications: Software
  21.964 +               for Humanity},
  21.965 +  year      = {2012},
  21.966 +  pages     = {5--6},
  21.967 +  ee        = {http://doi.acm.org/10.1145/2384716.2384720},
  21.968 +  bibsource = {DBLP, http://dblp.uni-trier.de},
  21.969 +  venue     = {Tucson, AZ, USA},
  21.970 +  editor    = {Gary T. Leavens}
  21.971 +}
  21.972 +
  21.973 +@inproceedings{brown2012paraforming,
  21.974 + author = {Brown, Christopher and Loidl, Hans-Wolfgang and Hammond, Kevin},
  21.975 + title = {ParaForming: Forming Parallel Haskell Programs Using Novel Refactoring Techniques},
  21.976 + booktitle = {Proceedings of the 12th International Conference on Trends in Functional Programming},
  21.977 + series = {TFP'11},
  21.978 + year = {2012},
  21.979 + isbn = {978-3-642-32036-1},
  21.980 + venue = {Madrid, Spain},
  21.981 + pages = {82--97},
  21.982 + numpages = {16},
  21.983 + url = {http://dx.doi.org/10.1007/978-3-642-32037-8_6},
  21.984 + doi = {10.1007/978-3-642-32037-8_6},
  21.985 + acmid = {2362972},
  21.986 + publisher = {Springer-Verlag},
  21.987 + address = {Berlin, Heidelberg},
  21.988 +}
  21.989 +
  21.990 +@article{kennedy1991interactive,
  21.991 +  author    = {Ken Kennedy and
  21.992 +               Kathryn S. McKinley and
  21.993 +               Chau-Wen Tseng},
  21.994 +  title     = {Interactive Parallel Programming using the ParaScope Editor},
  21.995 +  journal   = {IEEE Transactions on Parallel and Distributed Systems},
  21.996 +  volume    = {2},
  21.997 +  number    = {3},
  21.998 +  year      = {1991},
  21.999 +  pages     = {329--341},
 21.1000 +  ee        = {http://doi.ieeecomputersociety.org/10.1109/71.86108},
 21.1001 +  bibsource = {DBLP, http://dblp.uni-trier.de}
 21.1002 +}
 21.1003 +
 21.1004 +@inproceedings{wloka2009refactoring,
 21.1005 + author = {Wloka, Jan and Sridharan, Manu and Tip, Frank},
 21.1006 + title = {Refactoring for Reentrancy},
 21.1007 + booktitle = {Proceedings of the the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering},
 21.1008 + series = {ESEC/FSE '09},
 21.1009 + year = {2009},
 21.1010 + isbn = {978-1-60558-001-2},
 21.1011 + venue = {Amsterdam, The Netherlands},
 21.1012 + pages = {173--182},
 21.1013 + numpages = {10},
 21.1014 + url = {http://doi.acm.org/10.1145/1595696.1595723},
 21.1015 + doi = {10.1145/1595696.1595723},
 21.1016 + acmid = {1595723},
 21.1017 + publisher = {ACM},
 21.1018 + address = {New York, NY, USA},
 21.1019 + keywords = {program transformation, reentrant code, refactoring},
 21.1020 +}
 21.1021 +
 21.1022 +@article{dig2011refactoring,
 21.1023 +  author    = {Danny Dig},
 21.1024 +  title     = {A Refactoring Approach to Parallelism},
 21.1025 +  journal   = {IEEE Software},
 21.1026 +  volume    = {28},
 21.1027 +  number    = {1},
 21.1028 +  year      = {2011},
 21.1029 +  pages     = {17--22},
 21.1030 +  ee        = {http://doi.ieeecomputersociety.org/10.1109/MS.2011.1},
 21.1031 +  bibsource = {DBLP, http://dblp.uni-trier.de}
 21.1032 +}
 21.1033 +
 21.1034 +@article{brown2013cost,
 21.1035 +  author    = {Christopher Brown and
 21.1036 +               Marco Danelutto and
 21.1037 +               Kevin Hammond and
 21.1038 +               Peter Kilpatrick and
 21.1039 +               Archibald Elliott},
 21.1040 +  title     = {Cost-Directed Refactoring for Parallel Erlang Programs},
 21.1041 +  journal   = {International Journal of Parallel Programming},
 21.1042 +  volume    = {42},
 21.1043 +  number    = {4},
 21.1044 +  year      = {2014},
 21.1045 +  pages     = {564--582},
 21.1046 +  ee        = {http://dx.doi.org/10.1007/s10766-013-0266-5},
 21.1047 +  bibsource = {DBLP, http://dblp.uni-trier.de}
 21.1048 +}
 21.1049 +
 21.1050 +@inproceedings{brown2012language,
 21.1051 + author = {Brown, Christopher and Hammond, Kevin and Danelutto, Marco and Kilpatrick, Peter},
 21.1052 + title = {A Language-Independent Parallel Refactoring Framework},
 21.1053 + booktitle = {Proceedings of the Fifth Workshop on Refactoring Tools},
 21.1054 + series = {WRT '12},
 21.1055 + year = {2012},
 21.1056 + isbn = {978-1-4503-1500-5},
 21.1057 + venue = {Rapperswil, Switzerland},
 21.1058 + pages = {54--58},
 21.1059 + numpages = {5},
 21.1060 + url = {http://doi.acm.org/10.1145/2328876.2328884},
 21.1061 + doi = {10.1145/2328876.2328884},
 21.1062 + acmid = {2328884},
 21.1063 + publisher = {ACM},
 21.1064 + address = {New York, NY, USA},
 21.1065 + keywords = {C/C++, Erlang, ParaPhrase, concurrency, parallelism, patterns, refactoring, skeletons},
 21.1066 +}
 21.1067 +
 21.1068 +@incollection{hammond2013paraphrase,
 21.1069 +	year={2013},
 21.1070 +	isbn={978-3-642-35886-9},
 21.1071 +	booktitle={Formal Methods for Components and Objects},
 21.1072 +	volume={7542},
 21.1073 +	series={Lecture Notes in Computer Science},
 21.1074 +	editor={Beckert, Bernhard and Damiani, Ferruccio and de Boer, Frank S. and Bonsangue, Marcello M.},
 21.1075 +	doi={10.1007/978-3-642-35887-6_12},
 21.1076 +	title={The ParaPhrase Project: Parallel Patterns for Adaptive Heterogeneous Multicore Systems},
 21.1077 +	url={http://dx.doi.org/10.1007/978-3-642-35887-6_12},
 21.1078 +	publisher={Springer-Verlag},
 21.1079 +	address={Berlin, Heidelberg},
 21.1080 +	author={Hammond, Kevin and Aldinucci, Marco and Brown, Christopher and Cesarini, Francesco and Danelutto, Marco and González-Vélez, Horacio and Kilpatrick, Peter and Keller, Rainer and Rossbory, Michael and Shainer, Gilad},
 21.1081 +	pages={218--236},
 21.1082 +	language={English}
 21.1083 +}
 21.1084 +
 21.1085 +@inproceedings{li2006comparative,
 21.1086 + author = {Li, Huiqing and Thompson, Simon},
 21.1087 + title = {Comparative Study of Refactoring Haskell and Erlang Programs},
 21.1088 + booktitle = {Proceedings of the Sixth IEEE International Workshop on Source Code Analysis and Manipulation},
 21.1089 + series = {SCAM '06},
 21.1090 + year = {2006},
 21.1091 + isbn = {0-7695-2353-6},
 21.1092 + pages = {197--206},
 21.1093 + numpages = {10},
 21.1094 + url = {http://dx.doi.org/10.1109/SCAM.2006.8},
 21.1095 + doi = {10.1109/SCAM.2006.8},
 21.1096 + acmid = {1174113},
 21.1097 + publisher = {IEEE Computer Society},
 21.1098 + address = {Washington, DC, USA},
 21.1099 +}
 21.1100 +
 21.1101 +@article{hindley1969principal,
 21.1102 +    author = {Hindley, R.},
 21.1103 +    doi = {10.2307/1995158},
 21.1104 +    issn = {00029947},
 21.1105 +    journal = {Transactions of the American Mathematical Society},
 21.1106 +    pages = {29--60},
 21.1107 +    publisher = {American Mathematical Society},
 21.1108 +    title = {{The Principal Type-Scheme of an Object in Combinatory Logic}},
 21.1109 +    volume = {146},
 21.1110 +    year = {1969}
 21.1111 +}
 21.1112 +
 21.1113 +@inproceedings{milner1975logic,
 21.1114 +  title={{A Logic for Computable Functions with Reflexive and Polymorphic Types}},
 21.1115 +  author={Milner, Robin and Morris, Lockwood and Newey, Malcolm},
 21.1116 +  year={1975},
 21.1117 +  booktitle = {Proceedings of the Conference on Proving and Improving Programs}
 21.1118 +}
 21.1119 +
 21.1120 +@phdthesis{damas1984type,
 21.1121 +  title={Type assignment in programming languages},
 21.1122 +  author={Damas, Luis Manuel Martins},
 21.1123 +  year={1984},
 21.1124 +  school={University of Edinburgh}
 21.1125 +}
 21.1126 +
 21.1127 +@article{paulson1989foundation,
 21.1128 +  author    = {Lawrence C. Paulson},
 21.1129 +  title     = {The Foundation of a Generic Theorem Prover},
 21.1130 +  journal   = {Journal of Automated Reasoning},
 21.1131 +  volume    = {5},
 21.1132 +  number    = {3},
 21.1133 +  year      = {1989},
 21.1134 +  pages     = {363--397},
 21.1135 +  ee        = {http://dx.doi.org/10.1007/BF00248324},
 21.1136 +  bibsource = {DBLP, http://dblp.uni-trier.de}
 21.1137 +}
 21.1138 +
 21.1139 +@inproceedings{weeks2006whole,
 21.1140 + author = {Weeks, Stephen},
 21.1141 + title = {Whole-program Compilation in MLton},
 21.1142 + booktitle = {Proceedings of the 2006 Workshop on ML},
 21.1143 + series = {ML '06},
 21.1144 + year = {2006},
 21.1145 + isbn = {1-59593-483-9},
 21.1146 + venue = {Portland, Oregon, USA},
 21.1147 + pages = {1--1},
 21.1148 + numpages = {1},
 21.1149 + url = {http://doi.acm.org/10.1145/1159876.1159877},
 21.1150 + doi = {10.1145/1159876.1159877},
 21.1151 + acmid = {1159877},
 21.1152 + publisher = {ACM},
 21.1153 + address = {New York, NY, USA},
 21.1154 +}
 21.1155 +
 21.1156 +@online{smlnj2013standard,
 21.1157 +  url={http://www.smlnj.org/},
 21.1158 +  title={Standard ML of New Jersey},
 21.1159 +  author={{The SML/NJ Fellowship}},
 21.1160 +  urldate={2014-05-15}
 21.1161 +}
 21.1162 +
 21.1163 +@online{romanenko2014moscow,
 21.1164 +  url={http://mosml.org/},
 21.1165 +  title={Moscow ML},
 21.1166 +  urldate={2014-05-15}
 21.1167 +}
 21.1168 +
 21.1169 +@online{matthews2014documentation,
 21.1170 +  url={http://polyml.org/Doc.html},
 21.1171 +  title={Documentation},
 21.1172 +  author={Matthews, David C.J.},
 21.1173 +  urldate={2014-05-15}
 21.1174 +}
 21.1175 +
 21.1176 +@techreport{matthews1995papers,
 21.1177 +  author =	 {Matthews, David C.J.},
 21.1178 +  title = 	 {{Papers on Poly/ML}},
 21.1179 +  year = 	 {1989},
 21.1180 +  month = 	 {feb},
 21.1181 +  institution =  {Computer Laboratory, University of Cambridge}
 21.1182 +}
 21.1183 +
 21.1184 +@techreport{matthews1991distributed,
 21.1185 +  title={A Distributed Concurrent Implementation of Standard ML},
 21.1186 +  author={Matthews, David C.J.},
 21.1187 +  year={1991},
 21.1188 +  institution={LFCS, Department of Computer Science, University of Edinburgh}
 21.1189 +}
 21.1190 +
 21.1191 +@inproceedings{reppy1993concurrent,
 21.1192 + author = {Reppy, John H.},
 21.1193 + title = {Concurrent ML: Design, Application and Semantics},
 21.1194 + booktitle = {Functional Programming, Concurrency, Simulation and Automated Reasoning: International Lecture Series 1991-1992},
 21.1195 + venue={McMaster University, Hamilton, Ontario, Canada},
 21.1196 + year = {1993},
 21.1197 + isbn = {3-540-56883-2},
 21.1198 + pages = {165--198},
 21.1199 + numpages = {34},
 21.1200 + publisher = {Springer-Verlag}
 21.1201 +}
 21.1202 +
 21.1203 +@manual{wenzel2013isar,
 21.1204 +  title={The Isabelle/Isar Reference Manual},
 21.1205 +  author={Wenzel, Makarius and others},
 21.1206 +  year={2013},
 21.1207 +  url={http://isabelle.in.tum.de/dist/Isabelle2013-2/doc/isar-ref.pdf}
 21.1208 +}
 21.1209 +
 21.1210 +@incollection{wenzel1999isar,
 21.1211 +  volume={1690},
 21.1212 +  series={Lecture Notes in Computer Science},
 21.1213 +  editor={Bertot, Yves and Dowek, Gilles and Théry, Laurent and Hirschowitz, André and Paulin, Christine},
 21.1214 +  doi={10.1007/3-540-48256-3_12},
 21.1215 +  title={Isar — A Generic Interpretative Approach to Readable Formal Proof Documents},
 21.1216 +  author={Wenzel, Makarius},
 21.1217 +  booktitle={Theorem Proving in Higher Order Logics},
 21.1218 +  pages={167--183},
 21.1219 +  year={1999},
 21.1220 +  publisher={Springer-Verlag},
 21.1221 +  address={Berlin, Heidelberg},
 21.1222 +  isbn={978-3-540-66463-5}
 21.1223 +}
 21.1224 +
 21.1225 +@inproceedings{wenzel2014uipide,
 21.1226 +  title={Asynchronous User Interaction and Tool Integration in Isabelle/PIDE},
 21.1227 +  author={Wenzel, Makarius},
 21.1228 +  year={2014},
 21.1229 +    month            = {jul},
 21.1230 +    booktitle={Proceedings of the 5th Int.\ Conference on Interactive Theorem Proving},
 21.1231 +    venue={Vienna, Austria},
 21.1232 +    editor           = {Klein, G. and Gamboa, R.}
 21.1233 +}
 21.1234 +
 21.1235 +@inproceedings{matichuk2014isabelle,
 21.1236 +    author = {Matichuk, Daniel and Wenzel, Makarius and Murray, Toby},
 21.1237 +    month            = {jul},
 21.1238 +    year             = {2014},
 21.1239 +    title            = {An Isabelle Proof Method Language},
 21.1240 +    booktitle={Proceedings of the 5th Int.\ Conference on Interactive Theorem Proving},
 21.1241 +    publisher        = {Springer-Verlag},
 21.1242 +    venue={Vienna, Austria},
 21.1243 +    editor           = {Klein, G. and Gamboa, R.}
 21.1244 +}
 21.1245 +
 21.1246 +@article{wenzel2012asynchronous,
 21.1247 +  author    = {Makarius Wenzel},
 21.1248 +  title     = {Asynchronous Proof Processing with Isabelle/Scala and Isabelle/jEdit},
 21.1249 +  journal   = {Electronic Notes in Theoretical Computer Science},
 21.1250 +  volume    = {285},
 21.1251 +  year      = {2012},
 21.1252 +  pages     = {101--114}
 21.1253 +}
 21.1254 +
 21.1255 +@manual{wenzel2013isabelle,
 21.1256 +  title={The Isabelle System Manual},
 21.1257 +  author={Wenzel, Makarius and Berghofer, Stefan},
 21.1258 +  year={2013},
 21.1259 +  url={http://isabelle.in.tum.de/dist/Isabelle2013-2/doc/system.pdf}
 21.1260 +}
 21.1261 +
 21.1262 +@inproceedings{wenzel2011isabelle,
 21.1263 + author = {Wenzel, Makarius},
 21.1264 + title = {Isabelle As Document-Oriented Proof Assistant},
 21.1265 + booktitle = {Proceedings of the 18th Calculemus and 10th International Conference on Intelligent Computer Mathematics},
 21.1266 + series = {MKM'11},
 21.1267 + year = {2011},
 21.1268 + isbn = {978-3-642-22672-4},
 21.1269 + venue = {Bertinoro, Italy},
 21.1270 + pages = {244--259},
 21.1271 + numpages = {16},
 21.1272 + url = {http://dl.acm.org/citation.cfm?id=2032713.2032732},
 21.1273 + acmid = {2032732},
 21.1274 + publisher = {Springer-Verlag},
 21.1275 + address = {Berlin, Heidelberg},
 21.1276 +}
 21.1277 +
 21.1278 +@inproceedings{wenzel2012isabelle,
 21.1279 + author = {Wenzel, Makarius},
 21.1280 + title = {Isabelle/jEdit: A Prover IDE Within the PIDE Framework},
 21.1281 + booktitle = {Proceedings of the 11th International Conference on Intelligent Computer Mathematics},
 21.1282 + series = {CICM'12},
 21.1283 + year = {2012},
 21.1284 + isbn = {978-3-642-31373-8},
 21.1285 + venue = {Bremen, Germany},
 21.1286 + pages = {468--471},
 21.1287 + numpages = {4},
 21.1288 + url = {http://dx.doi.org/10.1007/978-3-642-31374-5_38},
 21.1289 + doi = {10.1007/978-3-642-31374-5_38},
 21.1290 + acmid = {2352851},
 21.1291 + publisher = {Springer-Verlag},
 21.1292 + address = {Berlin, Heidelberg},
 21.1293 +}
 21.1294 +
 21.1295 +@manual{wenzel2013jedit,
 21.1296 +  title={Isabelle/jEdit},
 21.1297 +  author={Wenzel, Makarius},
 21.1298 +  year={2013},
 21.1299 +  url={http://isabelle.in.tum.de/dist/doc/jedit.pdf}
 21.1300 +}
 21.1301 +
 21.1302 +@book{chlipala2011certified,
 21.1303 + author = {Chlipala, Adam},
 21.1304 + title = {Certified Programming with Dependent Types: A Pragmatic Introduction to the Coq Proof Assistant},
 21.1305 + year = {2013},
 21.1306 + isbn = {0262026651, 9780262026659},
 21.1307 + publisher = {The MIT Press},
 21.1308 +} 
 21.1309 +
 21.1310 +@book{nipkow2002isabelle,
 21.1311 + author = {Nipkow, Tobias and Wenzel, Makarius and Paulson, Lawrence C.},
 21.1312 + title = {Isabelle/HOL: A Proof Assistant for Higher-order Logic},
 21.1313 + year = {2013},
 21.1314 + month = {12},
 21.1315 + isbn = {3-540-43376-7},
 21.1316 + publisher = {Springer-Verlag}
 21.1317 +}
 21.1318 +
 21.1319 +@manual{wenzel2013impl,
 21.1320 +  title={The Isabelle/Isar Implementation},
 21.1321 +  author={Wenzel, Makarius},
 21.1322 +  year={2013},
 21.1323 +  note={With contributions by Florian Haftmann and Larry Paulson},
 21.1324 +  url={http://isabelle.in.tum.de/doc/implementation.pdf}
 21.1325 +}
 21.1326 +
 21.1327 +@book{van1979checking,
 21.1328 +   title = {Checking Landau's ``Grundlagen'' in the Automath system},
 21.1329 +   author = {van Benthem Jutting, L. S.},
 21.1330 +   publisher = {Mathematisch Centrum},
 21.1331 +   address = {Eindhoven},
 21.1332 +   year = {1979}
 21.1333 +}
 21.1334 +
 21.1335 +@article{landin1966next,
 21.1336 +  author    = {Peter J. Landin},
 21.1337 +  title     = {The next 700 programming languages},
 21.1338 +  journal   = {Communications of the ACM},
 21.1339 +  volume    = {9},
 21.1340 +  number    = {3},
 21.1341 +  year      = {1966},
 21.1342 +  pages     = {157--166}
 21.1343 +}
 21.1344 +
 21.1345 +@inproceedings{paulson1988isabelle,
 21.1346 + author = {Paulson, Lawrence C.},
 21.1347 + title = {Isabelle: The Next Seven Hundred Theorem Provers},
 21.1348 + booktitle = {Proceedings of the 9th International Conference on Automated Deduction},
 21.1349 + year = {1988},
 21.1350 + isbn = {3-540-19343-X},
 21.1351 + pages = {772--773},
 21.1352 + numpages = {2},
 21.1353 + url = {http://dl.acm.org/citation.cfm?id=648228.751990},
 21.1354 + acmid = {751990},
 21.1355 + publisher = {Springer-Verlag},
 21.1356 + address = {London, UK},
 21.1357 +}
 21.1358 +
 21.1359 +@article{gentzen1964investigations,
 21.1360 +  title={Investigations into logical deduction},
 21.1361 +  author={Gentzen, Gerhard},
 21.1362 +  journal={American Philosophical Quarterly},
 21.1363 +  volume={1},
 21.1364 +  number={4},
 21.1365 +  pages={288--306},
 21.1366 +  year={1964},
 21.1367 +  publisher={JSTOR}
 21.1368 +}
 21.1369 +
 21.1370 +@article{coquand1988calculus,
 21.1371 +  author    = {Thierry Coquand and
 21.1372 +               G{\'e}rard P. Huet},
 21.1373 +  title     = {The Calculus of Constructions},
 21.1374 +  journal   = {Information and Computation},
 21.1375 +  volume    = {76},
 21.1376 +  number    = {2/3},
 21.1377 +  year      = {1988},
 21.1378 +  pages     = {95--120},
 21.1379 +  ee        = {http://dx.doi.org/10.1016/0890-5401(88)90005-3},
 21.1380 +  bibsource = {DBLP, http://dblp.uni-trier.de}
 21.1381 +}
 21.1382 +
 21.1383 +@article{fitzgerald2008vdmtools,
 21.1384 +  author    = {John S. Fitzgerald and
 21.1385 +               Peter Gorm Larsen and
 21.1386 +               Shin Sahara},
 21.1387 +  title     = {VDMTools: advances in support for formal modeling in VDM},
 21.1388 +  journal   = {SIGPLAN Notices},
 21.1389 +  volume    = {43},
 21.1390 +  number    = {2},
 21.1391 +  year      = {2008},
 21.1392 +  pages     = {3--11},
 21.1393 +  ee        = {http://doi.acm.org/10.1145/1361213.1361214},
 21.1394 +  bibsource = {DBLP, http://dblp.uni-trier.de}
 21.1395 +}
 21.1396 +
 21.1397 +@incollection{fitzgerald2008vienna,
 21.1398 +	title = {Vienna Development Method},
 21.1399 +	author = {Fitzgerald, John S. and Larsen, Peter Gorm and Verhoef, Marcel},
 21.1400 +	publisher = {John Wiley \& Sons, Inc.},
 21.1401 +	isbn = {9780470050118},
 21.1402 +	url = {http://dx.doi.org/10.1002/9780470050118.ecse447},
 21.1403 +	doi = {10.1002/9780470050118.ecse447},
 21.1404 +	booktitle = {Wiley Encyclopedia of Computer Science and Engineering},
 21.1405 +	year = {2007}
 21.1406 +}
 21.1407 +
 21.1408 +@book{bjorner2006software,
 21.1409 +  title={Software Engineering},
 21.1410 +  author={Bj{\o}rner, Dines},
 21.1411 +  volume={1,2,3},
 21.1412 +  year={2006},
 21.1413 +  series={Texts in Theoretical Computer Science},
 21.1414 +  publisher={Springer-Verlag}
 21.1415 +}
 21.1416 +
 21.1417 +@article{back2010structured,
 21.1418 +  author    = {Ralph-Johan Back},
 21.1419 +  title     = {Structured derivations: a unified proof style for teaching mathematics},
 21.1420 +  journal   = {Formal Aspects of Computing},
 21.1421 +  volume    = {22},
 21.1422 +  number    = {5},
 21.1423 +  year      = {2010},
 21.1424 +  pages     = {629--661}
 21.1425 +}
 21.1426 +
 21.1427 +@article{wenzel2007generic,
 21.1428 +  title={Isabelle/Isar -- a generic framework for human-readable proof documents},
 21.1429 +  author={Wenzel, Makarius},
 21.1430 +  journal={From Insight to Proof -- Festschrift in Honour of Andrzej Trybulec},
 21.1431 +  volume={10},
 21.1432 +  number={23},
 21.1433 +  pages={277--298},
 21.1434 +  year={2007}
 21.1435 +}
 21.1436 +
 21.1437 +@inproceedings{neuper2012automated,
 21.1438 +  author    = {Walther Neuper},
 21.1439 +  title     = {Automated Generation of User Guidance by Combining Computation
 21.1440 +               and Deduction},
 21.1441 +  editor    = {Pedro Quaresma and
 21.1442 +               Ralph-Johan Back},
 21.1443 +  booktitle = {Proceedings of the First Workshop on CTP Components for Educational
 21.1444 +               Software (THedu'11)},
 21.1445 +  year      = {2011},
 21.1446 +  pages     = {82--101},
 21.1447 +  series    = {EPTCS},
 21.1448 +  volume    = {79},
 21.1449 +  venue     = {Wroclaw, Poland},
 21.1450 +  month     = {7}
 21.1451 +}
 21.1452 +
 21.1453 +@article{daroczy2013error,
 21.1454 +  title={Error-Patterns within ``Next-Step-Guidance'' in TP-based Educational Systems},
 21.1455 +  author={Dar{\'o}czy, Gabriella and Neuper, Walther},
 21.1456 +  journal={Electronic Journal of Mathematics \& Technology},
 21.1457 +  volume={7},
 21.1458 +  number={2},
 21.1459 +  year={2013}
 21.1460 +}
 21.1461 +
 21.1462 +@mastersthesis{kienleitner2012towards,
 21.1463 +  title={Towards ``NextStep Userguidance'' in a Mechanized Math Assistant},
 21.1464 +  author={Kienleitner, Markus},
 21.1465 +  school={Graz University of Technology},
 21.1466 +  year={2012},
 21.1467 +  type={Bachelor's thesis}
 21.1468 +}
 21.1469 +
 21.1470 +@inproceedings{wenzel2009parallel,
 21.1471 +	author = {Makarius Wenzel},
 21.1472 +	title = {{Parallel Proof Checking in {Isabelle/Isar}}},
 21.1473 +	booktitle = {ACM SIGSAM Workshop on Programming Languages for Mechanized Mathematics Systems (PLMMS 2009)},
 21.1474 +	year = {2009},
 21.1475 +	editor = {Dos Reis, G. and Th\'ery, L.},
 21.1476 +	publisher = {ACM Digital Library}
 21.1477 +}
 21.1478 +
 21.1479 +@techreport{buchberger1984mathematik,
 21.1480 +  author={Buchberger, Bruno},
 21.1481 +  title={Mathematik für Informatiker II (Problemlösestrategien und Algorithmentypen)},
 21.1482 +  institution={RISC-Linz},
 21.1483 +  type={Lecture notes},
 21.1484 +  number={CAMP-Publ.-No. 84-4.0},
 21.1485 +  year={1984}
 21.1486 +}
 21.1487 +
 21.1488 +@mastersthesis{lehnfeld2011verbindung,
 21.1489 +  title={Verbindung von 'Computation' und 'Deduction' im \isac{}-System},
 21.1490 +  author={Lehnfeld, Mathias},
 21.1491 +  school={Technische Universität Wien},
 21.1492 +  year={2011},
 21.1493 +  type={Bachelor's project report}
 21.1494 +}
 21.1495 +
 21.1496 +@phdthesis{krauss2009automating,
 21.1497 +  author    = {Alexander Krauss},
 21.1498 +  title     = {Automating recursive definitions and termination proofs
 21.1499 +               in higher-order logic},
 21.1500 +  year      = {2009},
  21.1501 +  pages     = {1--127},
 21.1502 +  school    = {Technische Universität München},
 21.1503 +  ee        = {http://mediatum2.ub.tum.de/doc/681651/document.pdf},
 21.1504 +  bibsource = {DBLP, http://dblp.uni-trier.de}
 21.1505 +}
 21.1506 +
 21.1507 +@mastersthesis{rocnik2012interactive,
 21.1508 +  author={Jan Rocnik},
 21.1509 +  title={Interactive Course Material for Signal Processing based on Isabelle/\isac{}},
 21.1510 +  type={Bachelor's thesis},
 21.1511 +  year={2012},
 21.1512 +  url={http://www.ist.tugraz.at/projects/isac/publ/jrocnik_bakk.pdf},
 21.1513 +  school={Graz University of Technology}
 21.1514 +}
 21.1515 +
 21.1516 +@inproceedings{haftmann2010code,
 21.1517 + author = {Haftmann, Florian and Nipkow, Tobias},
 21.1518 + title = {Code Generation via Higher-order Rewrite Systems},
 21.1519 + booktitle = {Proceedings of the 10th International Conference on Functional and Logic Programming},
 21.1520 + series = {FLOPS'10},
 21.1521 + year = {2010},
 21.1522 + isbn = {3-642-12250-7, 978-3-642-12250-7},
 21.1523 + venue = {Sendai, Japan},
 21.1524 + pages = {103--117},
 21.1525 + numpages = {15},
 21.1526 + url = {http://dx.doi.org/10.1007/978-3-642-12251-4_9},
 21.1527 + doi = {10.1007/978-3-642-12251-4_9},
 21.1528 + acmid = {2175441},
 21.1529 + publisher = {Springer-Verlag},
 21.1530 + address = {Berlin, Heidelberg},
 21.1531 +}
 21.1532 +
 21.1533 +@book{charniak2014artificial,
 21.1534 +  author    = {Charniak, Eugene and Riesbeck, Christopher K. and McDermott, Drew V. and Meehan, James R.},
 21.1535 +  title     = {Artificial Intelligence Programming},
 21.1536 +  edition   = {2},
 21.1537 +  isbn      = {0898596092},
 21.1538 +  year      = {1987},
 21.1539 +  publisher = {Psychology Press}
 21.1540 +}
 21.1541 +
 21.1542 +@inproceedings{aspinall2000proof,
 21.1543 + author = {Aspinall, David},
 21.1544 + title = {Proof General: A Generic Tool for Proof Development},
 21.1545 + booktitle = {Proceedings of the 6th International Conference on Tools and Algorithms for Construction and Analysis of Systems: Held As Part of the European Joint Conferences on the Theory and Practice of Software, ETAPS 2000},
 21.1546 + series = {TACAS '00},
 21.1547 + year = {2000},
 21.1548 + isbn = {3-540-67282-6},
 21.1549 + pages = {38--42},
 21.1550 + numpages = {5},
 21.1551 + url = {http://dl.acm.org/citation.cfm?id=646484.691773},
 21.1552 + acmid = {691773},
 21.1553 + publisher = {Springer-Verlag},
 21.1554 + address = {London, UK, UK},
 21.1555 +}
 21.1556 +
 21.1557 +@book{simon1965shape,
 21.1558 +    address = {New York},
 21.1559 +    author = {Simon, H. A.},
 21.1560 +    publisher = {Harper \& Row},
 21.1561 +    title = {{The Shape of Automation for Men and Management}},
 21.1562 +    year = {1965}
 21.1563 +}
 21.1564 +
 21.1565 +@book{gordon1993introduction,
 21.1566 +  title={Introduction to HOL: A theorem proving environment for higher-order logic},
 21.1567 +  author={Gordon, Michael JC and Melham, Tom F},
 21.1568 +  year={1993},
 21.1569 +  publisher={Cambridge University Press}
 21.1570 +}
 21.1571 +
 21.1572 +@online{harrison2014hol,
 21.1573 +  author={Harrison, John},
 21.1574 +  title={HOL Light},
 21.1575 +  urldate={2014-05-26},
 21.1576 +  url={http://www.cl.cam.ac.uk/~jrh13/hol-light/}
 21.1577 +}
 21.1578 +
 21.1579 +@article{appel1977solution,
 21.1580 +  title={The solution of the four-color-map problem},
 21.1581 +  author={Appel, Kenneth and Haken, Wolfgang},
 21.1582 +  journal={Scientific American},
 21.1583 +  volume={237},
 21.1584 +  pages={108--121},
 21.1585 +  year={1977}
 21.1586 +}
 21.1587 +
 21.1588 +@incollection{hales2011historical,
 21.1589 +	year={2011},
 21.1590 +	isbn={978-1-4614-1128-4},
 21.1591 +	booktitle={The Kepler Conjecture},
 21.1592 +	editor={Lagarias, Jeffrey C.},
 21.1593 +	doi={10.1007/978-1-4614-1129-1_3},
 21.1594 +	title={Historical Overview of the Kepler Conjecture},
 21.1595 +	url={http://dx.doi.org/10.1007/978-1-4614-1129-1_3},
 21.1596 +	publisher={Springer New York},
  21.1597 +	author={Hales, Thomas C.},
  21.1598 +	pages={65--82},
 21.1599 +	language={English}
 21.1600 +}
 21.1601 +
 21.1602 +@techreport{sharangpani1994statistical,
 21.1603 +  title={Statistical Analysis of Floating Point Flaw in the Pentium Processor},
 21.1604 +  author={Sharangpani, HP and Barton, ML},
 21.1605 +  institution={Intel Corporation},
 21.1606 +  year={1994}
 21.1607 +}
 21.1608 +
 21.1609 +@online{intel1997intel,
 21.1610 +  year={1997},
 21.1611 +  title={Intel® Pentium® Processor - Invalid Instruction Erratum Overview},
 21.1612 +  urldate={2014-05-26},
 21.1613 +  url={http://www.intel.com/support/processors/pentium/ppiie/index.htm}
 21.1614 +}
 21.1615 +
 21.1616 +@article{brandt2007theorem,
 21.1617 +  title={Theorem Proving in Higher Order Logics},
  21.1618 +  author={Schneider, Klaus and Brandt, Jens},
 21.1619 +  year={2007},
 21.1620 +  publisher={Springer-Verlag}
 21.1621 +}
 21.1622 +
 21.1623 +@online{isabelle2013news,
 21.1624 +  title={isabelle.in.tum.de/dist/Isabelle2013-2/NEWS},
 21.1625 +  url={http://isabelle.in.tum.de/dist/Isabelle2013-2/NEWS},
 21.1626 +  urldate={2014-05-27},
 21.1627 +  year=2013
 21.1628 +}
 21.1629 +
 21.1630 +@online{isac2014knowledge,
 21.1631 +  title={www.ist.tugraz.at/projects/isac/www/kbase/thy/index\_thy.html},
 21.1632 +  url={http://www.ist.tugraz.at/projects/isac/www/kbase/thy/index_thy.html},
 21.1633 +  urldate={2014-06-01}
 21.1634 +}
 21.1635 +
 21.1636 +@online{isac2014pbl,
 21.1637 +  title={www.ist.tugraz.at/projects/isac/www/kbase/pbl/index\_pbl.html},
 21.1638 +  url={http://www.ist.tugraz.at/projects/isac/www/kbase/pbl/index_pbl.html},
 21.1639 +  urldate={2014-06-01}
 21.1640 +}
 21.1641 +
 21.1642 +@online{isac2014met,
 21.1643 +  title={www.ist.tugraz.at/projects/isac/www/kbase/met/index\_met.html},
 21.1644 +  url={http://www.ist.tugraz.at/projects/isac/www/kbase/met/index_met.html},
 21.1645 +  urldate={2014-06-01}
 21.1646 +}
 21.1647 +
 21.1648 +@manual{krauss2013defining,
 21.1649 +  title={Defining recursive functions in Isabelle/HOL},
 21.1650 +  author={Krauss, Alexander},
 21.1651 +  year={2013},
 21.1652 +  url={http://isabelle.in.tum.de/doc/functions.pdf}
 21.1653 +}
 21.1654 +
 21.1655 +@online{google2012flyspeck,
 21.1656 +  title={FlyspeckFactSheet},
 21.1657 +  url={https://code.google.com/p/flyspeck/wiki/FlyspeckFactSheet},
 21.1658 +  urldate={2014-06-01},
 21.1659 +  year={2012}
 21.1660 +}
 21.1661 +
 21.1662 +@online{austriaforum2014zemanek,
 21.1663 +  title={Zemanek, Heinz | Austria-Forum > Biographien},
 21.1664 +  url={http://austria-forum.org/af/Wissenssammlungen/Biographien/Zemanek,_Heinz},
 21.1665 +  urldate={2014-06-02}
 21.1666 +}
 21.1667 +
 21.1668 +@article{gonthier2008formal,
 21.1669 +  title={Formal Proof --- The Four Colour Theorem},
 21.1670 +  author={Gonthier, Georges},
 21.1671 +  journal={Notices of the AMS},
 21.1672 +  volume={55},
 21.1673 +  number={11},
 21.1674 +  pages={1382--1393},
 21.1675 +  year={2008}
 21.1676 +}
 21.1677 +
 21.1678 +@online{pipermail2014clisa,
 21.1679 +  title={The Cl-isabelle-users Archives},
 21.1680 +  url={https://lists.cam.ac.uk/pipermail/cl-isabelle-users/},
 21.1681 +  urldate={2014-06-05},
 21.1682 +  year={2014}
 21.1683 +}
 21.1684 +
 21.1685 +@online{tum2013team,
 21.1686 +  title={The Team},
 21.1687 +  url={http://www21.in.tum.de/team},
 21.1688 +  urldate={2014-06-05},
 21.1689 +  year={2013}
 21.1690 +}
 21.1691 +
 21.1692 +@online{tum2014isainst,
 21.1693 +  title={Installation},
 21.1694 +  url={http://isabelle.in.tum.de/installation.html},
 21.1695 +  urldate={2014-06-05},
 21.1696 +  year={2013}
 21.1697 +}
 21.1698 +
 21.1699 +@online{andriusvelykis2013isaclipse,
 21.1700 +  title={Isabelle/Eclipse},
 21.1701 +  url={http://andriusvelykis.github.io/isabelle-eclipse/},
 21.1702 +  urldate={2014-06-05},
 21.1703 +  year={2013}
 21.1704 +}
 21.1705 +
 21.1706 +@online{emath2013emath,
 21.1707 +  title={E-Math},
 21.1708 +  url={http://emath.eu/en/},
 21.1709 +  urldate={2014-06-05},
 21.1710 +  year={2013}
 21.1711 +}
 21.1712 +
 21.1713 +@online{tum2013isadoc,
 21.1714 +  title={Documentation},
 21.1715 +  url={http://isabelle.in.tum.de/documentation.html},
 21.1716 +  urldate={2014-06-13},
 21.1717 +  year={2013},
 21.1718 +  OPTnote={Theory libraries for Isabelle2013-2}
 21.1719 +}
 21.1720 +
 21.1721 +@online{wiki2014proofass,
 21.1722 +  title={Proof assistant - Wikipedia, the free encyclopedia},
 21.1723 +  url={http://en.wikipedia.org/wiki/Proof_assistant},
 21.1724 +  urldate={2014-06-13},
 21.1725 +  year={2014}
 21.1726 +}
 21.1727 +
 21.1728 +@online{wiki2014atp,
 21.1729 +  title={Automated theorem proving - Wikipedia, the free encyclopedia},
 21.1730 +  url={http://en.wikipedia.org/wiki/Automated_theorem_proving},
 21.1731 +  urldate={2014-06-13},
 21.1732 +  year={2014}
 21.1733 +}
 21.1734 +
 21.1735 +@online{tum2013isahol,
 21.1736 +  title={Isabelle/HOL sessions},
 21.1737 +  url={http://isabelle.in.tum.de/dist/library/HOL/},
 21.1738 +  urldate={2014-06-13}
 21.1739 +}
 21.1740 +
 21.1741 +@online{isac2014pblequuniv,
 21.1742 +  title={www.ist.tugraz.at/projects/isac/www/kbase/pbl/pbl\_equ\_univ.html},
 21.1743 +  url={http://www.ist.tugraz.at/projects/isac/www/kbase/pbl/pbl_equ_univ.html},
 21.1744 +  urldate={2014-06-13}
 21.1745 +}
 21.1746 +
 21.1747 +@mastersthesis{haller2006object,
 21.1748 +  title={An Object-Oriented Programming Model for Event-Based Actors},
 21.1749 +  author={Haller, Philipp},
 21.1750 +  year={2006},
 21.1751 +  school={University of Karlsruhe}
 21.1752 +}
 21.1753 +
 21.1754 +@online{wenzel2014actors,
 21.1755 +  title={Re: [isabelle-dev] scala-2.11.0},
 21.1756 +  url={http://www.mail-archive.com/isabelle-dev@mailbroy.informatik.tu-muenchen.de/msg05253.html},
 21.1757 +  author={Wenzel, Makarius},
 21.1758 +  year={2014},
 21.1759 +  urldate={2014-06-16}
 21.1760 +}
 21.1761 +
 21.1762 +@inproceedings{blanchette2011automatic,
 21.1763 + author = {Blanchette, Jasmin Christian and Bulwahn, Lukas and Nipkow, Tobias},
 21.1764 + title = {Automatic Proof and Disproof in Isabelle/HOL},
 21.1765 + booktitle = {Proceedings of the 8th International Conference on Frontiers of Combining Systems},
 21.1766 + series = {FroCoS'11},
 21.1767 + year = {2011},
 21.1768 + isbn = {978-3-642-24363-9},
 21.1769 + venue = {Saarbrücken, Germany},
 21.1770 + pages = {12--27},
 21.1771 + numpages = {16},
 21.1772 + url = {http://dl.acm.org/citation.cfm?id=2050784.2050787},
 21.1773 + acmid = {2050787},
 21.1774 + publisher = {Springer-Verlag},
 21.1775 + address = {Berlin, Heidelberg},
 21.1776 +}
 21.1777 +
 21.1778 +%@inproceedings{wenzel2007sml,
 21.1779 +%  title={SML with antiquotations embedded into Isabelle/Isar},
 21.1780 +%  author={Wenzel, Makarius and Chaieb, Amine},
 21.1781 +%  booktitle={Workshop on Programming Languages for Mechanized Mathematics (satellite of CALCULEMUS 2007). %Hagenberg, Austria},
 21.1782 +%  year={2007}
 21.1783 +%}
 21.1784 +
 21.1785 +@online{tum2014repo,
 21.1786 +  title={isabelle: Summary},
 21.1787 +  url={http://isabelle.in.tum.de/repos/isabelle/},
 21.1788 +  year={2014},
 21.1789 +  urldate={2014-06-21}
 21.1790 +}
    22.1 --- a/doc-isac/mlehnfeld/master/thesis/preface.tex	Thu Jun 26 10:50:27 2014 +0200
    22.2 +++ b/doc-isac/mlehnfeld/master/thesis/preface.tex	Thu Jun 26 17:19:30 2014 +0200
    22.3 @@ -1,9 +1,19 @@
    22.4 -\chapter{Preface}
    22.5 +\chapter{Acknowledgements}
    22.6  \label{cha:preface}
    22.7  
    22.8 -{\it Übrigens, hier im Vorwort (das bei Diplomarbeiten üblich, bei Bachelorarbeiten 
    22.9 -aber entbehrlich ist) kann man kurz auf die Entstehung  des Dokuments eingehen.
   22.10 -Hier ist auch der Platz für allfällige Danksagungen (\zB an den Betreuer, 
   22.11 -den Begutachter, die Familie, den Hund, ...), Widmungen und philosophische 
   22.12 -Anmerkungen. Das sollte man allerdings auch nicht übertreiben und sich auf 
   22.13 -einen Umfang von maximal zwei Seiten beschränken.}
    22.14 +This page is dedicated to all those who contributed in any way to this thesis.
   22.15 +
    22.16 +First of all, I want to express my deepest gratitude to my parents. They were always there for me, constantly encouraging me and supporting me generously in so many ways. Then there are my sisters Clara, Vroni and Anita, whose wonderful personalities make me a happier person. I want to thank my relatives and friends for being such an important part of my life and of who I am. Finally, my colleagues, funny and inspiring people without exception, have made my time as a student a pleasure.
   22.17 +
    22.18 +Last but most certainly not least, I am exceedingly grateful to my supervisors, Mag. Volker Christian and Dr. Walther Neuper.
   22.19 +
   22.20 +Walther Neuper's enthusiasm for his project and his eager cooperation and supervision were extremely motivating and made the whole process of creating this thesis very efficient. His gentle and wise personality and his trust in me were very encouraging.
   22.21 +
    22.22 +I owe great appreciation to Volker Christian for his friendly, competent and professional supervision and his interest in my topic. I am deeply thankful for his trust, support and encouragement to pursue my interests.
   22.23 +
   22.24 +\bigskip
   22.25 +\bigskip
   22.26 +
   22.27 +\noindent
   22.28 +{\em For Franz, my uncle and godfather.}
   22.29 +
    23.1 --- a/doc-isac/mlehnfeld/master/thesis/thesis.tex	Thu Jun 26 10:50:27 2014 +0200
    23.2 +++ b/doc-isac/mlehnfeld/master/thesis/thesis.tex	Thu Jun 26 17:19:30 2014 +0200
    23.3 @@ -1,67 +1,185 @@
    23.4 -\documentclass[master,english]{hgbthesis}
    23.5 -% Zulässige Class Options: 
    23.6 -%   Typ der Arbeit: diplom, master (default), bachelor, praktikum 
    23.7 -%   Hauptsprache: german (default), english
    23.8 -%%------------------------------------------------------------
    23.9 -
   23.10 -\graphicspath{{images/}}    % wo liegen die Bilder? 
   23.11 -\bibliography{literature}  	% Angabe der BibTeX-Datei, % utf8-change
   23.12 -
   23.13 -\newcommand{\isac}{${\cal I}\mkern-2mu{\cal S}\mkern-5mu{\cal AC}$}
   23.14 -\def\sisac{{\footnotesize${\cal I}\mkern-2mu{\cal S}\mkern-5mu{\cal AC}$}}
   23.15 -\newcommand{\isabelle}{{\it Isabelle}}
   23.16 -
   23.17 -%%%----------------------------------------------------------
   23.18 -\begin{document}
   23.19 -%%%----------------------------------------------------------
   23.20 -
   23.21 -% Einträge für ALLE Arbeiten: --------------------------------
   23.22 -\title{Introducing Parallelism and Concurrency in an Educational Mathematics System using Functional Programming}
   23.23 -\author{Mathias Lehnfeld}
   23.24 -\studiengang{Interactive Media}
   23.25 -\studienort{Hagenberg}
   23.26 -\abgabedatum{2014}{06}{18}	% {YYYY}{MM}{DD}
   23.27 -\betreuer{Mag. Volker Christian} % oder \betreuerin{..}
   23.28 -\betreuer{Dr. Walther Neuper} % oder \betreuerin{..}
   23.29 -\firma{
   23.30 -   Institut für Informationssysteme und Computer Medien (IICM)\\
   23.31 -   Fakultät für Informatik - Technische Universität Graz\\
   23.32 -   \\
   23.33 -   Inffeldgasse 16c\\
   23.34 -   8010 Graz - Österreich - Europa
   23.35 -}
   23.36 -
   23.37 -%%%----------------------------------------------------------
   23.38 -\frontmatter
   23.39 -\maketitle
   23.40 -\tableofcontents
   23.41 -%%%----------------------------------------------------------
   23.42 -
   23.43 -% \include{declaration}
   23.44 -\include{preface}
   23.45 -\include{abstract}
   23.46 -\include{kurzfassung}
   23.47 -
   23.48 -%%%----------------------------------------------------------
   23.49 -\mainmatter         % Hauptteil (ab hier arab. Seitenzahlen)
   23.50 -%%%----------------------------------------------------------
   23.51 -
   23.52 -\include{introduction}
   23.53 -\include{fundamentals}
   23.54 -\include{funproglangs_mcsystems}
   23.55 -\include{isabelle_isac}
   23.56 -\include{conclusion}
   23.57 -
   23.58 -%%%----------------------------------------------------------
   23.59 -%%%Anhang
   23.60 -\appendix
   23.61 -\include{appendix_a}
   23.62 -
   23.63 -%%%----------------------------------------------------------
   23.64 -\MakeBibliography
   23.65 -%%%----------------------------------------------------------
   23.66 -
   23.67 -%%%Messbox zur Druckkontrolle
   23.68 -\include{messbox}
   23.69 -
   23.70 -\end{document}
   23.71 +\documentclass[master,english]{hgbthesis}
   23.72 +% Zulässige Class Options: 
   23.73 +%   Typ der Arbeit: diplom, master (default), bachelor, praktikum 
   23.74 +%   Hauptsprache: german (default), english
   23.75 +%%------------------------------------------------------------
   23.76 +
   23.77 +% TODO !!! amdahl & gustafson: set samples to 2000+
   23.78 +% TODO !!! edit: appendix b cd-rom / dvd !?
   23.79 +% TODO !!! appendix b add Repo date
   23.80 +% TODO !!! render all images to png
   23.81 +% TODO !!! README windows line breaks!
   23.82 +% TODO !!! "Overfull" grep
   23.83 +
   23.84 +\graphicspath{{images/}} % wo liegen die Bilder? 
   23.85 +\bibliography{literature} % Angabe der BibTeX-Datei, % utf8-change
   23.86 +
   23.87 +\usepackage{tikz,pgfplots,pgfplotstable}
   23.88 +\pgfplotsset{compat=1.9}
   23.89 +\usetikzlibrary{positioning}
   23.90 +
   23.91 +\usepackage{listings} % syntax highlighting
   23.92 +
   23.93 +\definecolor{color1}{HTML}{a40000}
   23.94 +\definecolor{color2}{HTML}{3465a4}
   23.95 +\definecolor{color3}{HTML}{ce5c00}
   23.96 +\definecolor{color4}{HTML}{4e9a06}
   23.97 +\definecolor{color5}{HTML}{eeeeec}
   23.98 +\definecolor{color6}{HTML}{2e3436}
   23.99 +
  23.100 +\lstnewenvironment{MLCode}[1][] % code environment for ML
  23.101 +{\lstset{
  23.102 +    language=ML,
  23.103 +    keywordstyle=\bfseries\color{color1},
  23.104 +    %identifierstyle=\ttfamily,
  23.105 +    commentstyle=\color{color2},
  23.106 +    stringstyle=\color{color3},
  23.107 +    showstringspaces=false,
  23.108 +    basicstyle=\ttfamily\footnotesize,
  23.109 +    numberstyle=\tiny\ttfamily\color{color6},
  23.110 +    numbers=left,
  23.111 +    numbersep=10pt,
  23.112 +    tabsize=2,
  23.113 +    breaklines=true,
  23.114 +    %prebreak = \raisebox{0ex}[0ex][0ex]{\ensuremath{\hookleftarrow}},
  23.115 +    breakatwhitespace=false,
  23.116 +    %aboveskip={1.5\baselineskip},
  23.117 +    columns=fixed,
  23.118 +    upquote=true,
  23.119 +    extendedchars=true,
  23.120 +    % frame=single,
  23.121 +    backgroundcolor=\color{color5},
  23.122 +    %literate=%
  23.123 +    % *{0}{{{\color[rgb]{1,0,0}0}}}1
  23.124 +    %  {1}{{{\color[rgb]{1,0,0}1}}}1
  23.125 +    %  {2}{{{\color[rgb]{1,0,0}2}}}1
  23.126 +    %  {3}{{{\color[rgb]{1,0,0}3}}}1
  23.127 +    %  {4}{{{\color[rgb]{1,0,0}4}}}1
  23.128 +    %  {5}{{{\color[rgb]{1,0,0}5}}}1
  23.129 +    %  {6}{{{\color[rgb]{1,0,0}6}}}1
  23.130 +    %  {7}{{{\color[rgb]{1,0,0}7}}}1
  23.131 +    %  {8}{{{\color[rgb]{1,0,0}8}}}1
  23.132 +    %  {9}{{{\color[rgb]{1,0,0}9}}}1
  23.133 +    escapeinside={/+}{+/},% makes "/+" and "+/" available for Latex escapes (labels etc.)
  23.134 +    #1
  23.135 +}}
  23.136 +{}
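The `MLCode` environment defined above could be used as follows (a minimal sketch; the ML fragment is illustrative and not taken from the thesis, and the `/+ ... +/` pair is the `escapeinside` delimiter declared in the environment):

```latex
% Hypothetical usage of the MLCode environment.
% The /+ ... +/ delimiters escape back to LaTeX, e.g. to place a label.
\begin{MLCode}
fun fib 0 = 0
  | fib 1 = 1
  | fib n = fib (n - 1) + fib (n - 2) (* /+\label{line:fib}+/ *)
\end{MLCode}
```

Line numbers, the grey background and ML keyword highlighting then come from the `\lstset` options baked into the environment.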
  23.137 +\lstdefinelanguage{Scala}{
  23.138 +  morekeywords={abstract,case,catch,class,def,%
  23.139 +    do,else,extends,false,final,finally,%
  23.140 +    for,if,implicit,import,match,mixin,%
  23.141 +    new,null,object,override,package,%
  23.142 +    private,protected,requires,return,sealed,%
  23.143 +    super,this,throw,trait,true,try,%
  23.144 +    type,val,var,while,with,yield},
  23.145 +  otherkeywords={=>,<-,<\%,<:,>:,\#,@},
  23.146 +  sensitive=true,
  23.147 +  morecomment=[l]{//},
  23.148 +  morecomment=[n]{/*}{*/},
  23.149 +  morestring=[b]",
  23.150 +  morestring=[b]',
  23.151 +  morestring=[b]"""
  23.152 +}
  23.153 +\lstnewenvironment{ScalaCode}[2][] % code environment for Scala
  23.154 +{\noindent\minipage{\linewidth}
  23.155 +  \lstset{
  23.156 +    language=Scala,
  23.157 +    keywordstyle=\bfseries\color{color1},
  23.158 +    %identifierstyle=\ttfamily,
  23.159 +    commentstyle=\color{color2},
  23.160 +    stringstyle=\color{color3},
  23.161 +    showstringspaces=false,
  23.162 +    basicstyle=\ttfamily\footnotesize,
  23.163 +    numberstyle=\tiny\ttfamily\color{color6},
  23.164 +    numbers=left,
  23.165 +    numbersep=10pt,
  23.166 +    tabsize=2,
  23.167 +    breaklines=true,
  23.168 +    prebreak = \raisebox{0ex}[0ex][0ex]{\ensuremath{\hookleftarrow}},
  23.169 +    breakatwhitespace=false,
  23.170 +    aboveskip={1.5\baselineskip},
  23.171 +    columns=fixed,
  23.172 +    upquote=true,
  23.173 +    extendedchars=true,
  23.174 +    % frame=single,
  23.175 +    backgroundcolor=\color{color5},
  23.176 +    %literate=%
  23.177 +    % *{0}{{{\color[rgb]{1,0,0}0}}}1
  23.178 +    %  {1}{{{\color[rgb]{1,0,0}1}}}1
  23.179 +    %  {2}{{{\color[rgb]{1,0,0}2}}}1
  23.180 +    %  {3}{{{\color[rgb]{1,0,0}3}}}1
  23.181 +    %  {4}{{{\color[rgb]{1,0,0}4}}}1
  23.182 +    %  {5}{{{\color[rgb]{1,0,0}5}}}1
  23.183 +    %  {6}{{{\color[rgb]{1,0,0}6}}}1
  23.184 +    %  {7}{{{\color[rgb]{1,0,0}7}}}1
  23.185 +    %  {8}{{{\color[rgb]{1,0,0}8}}}1
  23.186 +    %  {9}{{{\color[rgb]{1,0,0}9}}}1
  23.187 +    label={#1},
  23.188 +    caption={#2},
  23.189 +    captionpos=b,
  23.190 +    escapeinside={/+}{+/}% makes "/+" and "+/" available for Latex escapes (labels etc.)
  23.191 +}}
  23.192 +{\endminipage}
  23.193 +\renewcommand{\lstlistingname}{Snippet} % Caption name
  23.194 +
  23.195 +%\newcommand{\isac}{\texorpdfstring{$\mathcal{I}\mkern-2mu\mathcal{S}\mkern-5mu\mathcal{AC}$}{Isac}}
  23.196 +%\def\sisac{\texorpdfstring{{\footnotesize \isac{}}}{Isac}}
  23.197 +\newcommand{\isac}{\textit{Isac}}
  23.198 +\newcommand{\isabelle}{\textit{Isabelle}}
  23.199 +\newcommand{\imlcode}[1]{{\ttfamily \begin{tabbing}#1\end{tabbing}}}
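The `\imlcode` macro wraps its argument in a `tabbing` environment set in typewriter font, so short ML fragments can be typeset inline without the full `MLCode` machinery. A sketch of its use (the fragment is hypothetical; `\\` breaks lines inside `tabbing`):

```latex
% Hypothetical usage of \imlcode for a short ML fragment.
\imlcode{fun double x = 2 * x \\
  val y = double 21}
```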
  23.200 +
  23.201 +%%%----------------------------------------------------------
  23.202 +\begin{document}
  23.203 +%%%----------------------------------------------------------
  23.204 +
  23.205 +% Einträge für ALLE Arbeiten: --------------------------------
  23.206 +\title{Introducing Parallelism in an Educational Mathematics System Developed in a Functional Programming Language}
  23.207 +\author{Mathias Lehnfeld}
  23.208 +\studiengang{Interactive Media}
  23.209 +\studienort{Hagenberg}
  23.210 +\abgabedatum{2014}{06}{18} % {YYYY}{MM}{DD}
  23.211 +\betreuer{Mag. Volker Christian} % oder \betreuerin{..}
  23.212 +\betreuer{Dr. Walther Neuper} % oder \betreuerin{..}
  23.213 +\firma{
  23.214 +   Institut für Informationssysteme und Computer Medien (IICM)\\
  23.215 +   Fakultät für Informatik - Technische Universität Graz\\
  23.216 +   \\
  23.217 +   Inffeldgasse 16c\\
  23.218 +   8010 Graz - Österreich - Europa
  23.219 +}
  23.220 +
  23.221 +%%%----------------------------------------------------------
  23.222 +\frontmatter
  23.223 +\maketitle
  23.224 +\tableofcontents
  23.225 +%%%----------------------------------------------------------
  23.226 +
  23.227 +% \include{declaration}
  23.228 +\include{preface}
  23.229 +\include{abstract}
  23.230 +\include{kurzfassung}
  23.231 +
  23.232 +%%%----------------------------------------------------------
  23.233 +\mainmatter         % Hauptteil (ab hier arab. Seitenzahlen)
  23.234 +%%%----------------------------------------------------------
  23.235 +
  23.236 +\include{introduction}
  23.237 +\include{fundamentals}
  23.238 +\include{funproglangs_mcsystems}
  23.239 +\include{isabelle_isac}
  23.240 +\include{conclusion}
  23.241 +
  23.242 +%%%----------------------------------------------------------
  23.243 +%%%Anhang
  23.244 +\appendix
  23.245 +\include{appendix_a}
  23.246 +\include{appendix_b}
  23.247 +
  23.248 +%%%----------------------------------------------------------
  23.249 +\MakeBibliography
  23.250 +%%%----------------------------------------------------------
  23.251 +
  23.252 +%%%Messbox zur Druckkontrolle
  23.253 +%\include{messbox}
  23.254 +
  23.255 +\end{document}