=====================[ Readings ]=====================

Read Chapter 20 of On Lisp. Focus on the Scheme DFT example in 20.1
and step through it by adding debug prints throughout the code, until
you understand how it works. You will need a working Scheme set-up for
this; see https://cosc59.gitlab.io/scheme/scheme-and-evolution-of-lisp.txt

Ch. 20.2 explains how continuations can be approximated in LISP. This
approximation is neither complete nor first-class, but stepping
through it is quite useful as well.

=====================[ Explaining continuations ]==================

The textbook explains continuations in Chapter 8.3, with a sketch of
the relevant history in 8.1. You may skip 8.2 on Exceptions for now.

Continuations are an idea that seems to have no single "best"
explanation. The idea of making first-class functions on the fly out
of large chunks of existing code _with associated execution state_
was revolutionary in its time, even though lambdas were already
first-class, and closures were on-the-fly made-to-order functions
that captured some existing storage (cf. eLisp's make-closure you see
in bytecode disassembly).

Inevitably, people who need to implement Call/CC have an epiphany like
https://news.ycombinator.com/item?id=10058104 or
https://gnuu.org/2009/03/21/demystifying-continuations-in-ruby/
or some other form of "what would a GOTO or JMP be if it had to be a
function?"

Even so, there are several levels of understanding continuations:

1. What will this code even do? (see examples below)
2. How to use this in a program?
3. How to implement it in an interpreter, so that it works as
   specified, regardless of efficiency?
4. How to implement it efficiently in a compiler?

(4) has been the subject of much research in PL, and has been absorbed
into unlikely environments like JavaScript. So it pays to be aware of
continuations.

=====================[ How to read code with CALL/CC ]=====================

Note that defining continuations as functions in Scheme is much easier
than, say, in C or Ruby (see above). In Scheme (and LISP) both whole
programs and any of their parts _are_ s-expressions. So taking any
s-expression (sexp) inside a program and inserting a CALL/CC there
makes it perfectly clear just where execution will resume when the
continuation made by that call/cc returns at that exact spot---if it's
ever called, of course.

If it does get called, the CALL/CC sexp will be replaced with whatever
argument the call to the saved continuation (CC) is given, while the
rest of the program's execution state will be exactly as it was at the
moment of evaluating the original CALL/CC: the local variables and the
stack.

So it's sort of a GOTO-with-sexps: "come back right here with whatever
new value you need processed, I'll be waiting here for you". Notice
that you cannot naturally jump to an expression in a LISP program,
because there is no natural way of labeling sexps (other than
defun/define-ing explicitly named functions). CALL/CC creates a way to
do so. The returned argument will simply replace the (call/cc ...)
node of the s-expression, and execution will continue with the same
closure _and the same stack_ as of the moment CALL/CC was called.

It may sound like time-travel, and it is, in the same sense that
cryogenic sleep is. Every relevant aspect of the environment is frozen
and brought forward---with the sole exception of the value passed to
this restored program. That value is new, replacing what (call/cc ...)
originally returned, without the just-unfrozen world knowing.
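To see this "replacement" in isolation, here is a minimal sketch (our
own example, not from the book; the global name *k* is made up). The
same (display ...) sexp runs twice and prints two different values:

  (define *k* #f)          ; "global" storage, deliberately
                           ; not a part of the game

  (display
   (+ 1 (call/cc
         (lambda (cc)
           (set! *k* cc)   ; freeze the world "(display (+ 1 [hole]))"
           1))))           ; first pass: the hole is 1, so 2 is printed
  (newline)

  (when *k*                ; guard, so we jump back only once
    (let ((k *k*))
      (set! *k* #f)
      (k 41)))             ; re-enter the frozen world with 41 in the
                           ; hole: (display (+ 1 41)) prints 42

You should see 2 and then 42: on the second pass nothing is
re-evaluated except the frozen context, with the new value in place of
the (call/cc ...) node.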
Note that this possibly creates a type conflict between the type
originally returned by call/cc and the argument type of any subsequent
invocation of the saved CC. That is indeed a problem, one that
"monads" were eventually designed to handle.

On the other hand, you will be discarding whatever stack and whatever
environment you called the continuation _from_. This is also like
time-travel: once you unfreeze a previously saved world and go there,
your pending future will not happen.

The beauty of this is, you can freeze and save many copies of the
world, including inside each other, and in "global" variables that are
deliberately not a part of the game. This construct proves to be very
powerful.

In the cont-dft.scm example of

  (let ((node1 (dft-node t1)))
    (if (eq? node1 'done)
        'done
        (list node1 (dft-node t2))))

or

  (let ((i1 (dft-node t1))
        (i2 (dft-node t2)))
    (cons i1 i2))

the calls to CALL/CC happen *inside* dft-node, and thus capture
everything outside them: the call to LIST that makes the pairs, the
EQ? check, and even---for the internal call to dft-node---the state of
the other dft-node call. That's why these continuations, when popped
from *saved*, print the _pairs of values_ from the respective trees T1
and T2, creating an iterator over the pair of trees. Thus we created
an iterator with no more work than describing how we build just one
pair of node values! The rest happens automagically. (A from-memory
sketch of dft-node and restart appears in the appendix below.)

==================[ More readings ]=========================

Every kind of control flow can be expressed with continuations. For an
overview of what continuations can do, read Matt Might's
http://matt.might.net/articles/programming-with-continuations--exceptions-backtracking-search-threads-generators-coroutines/

===========[ Logical Programming/Search with Call/CC ]======

Matt Might's AMB search example, solved, is in scheme/amb.scm
It turns out that the list iterator is exactly what's needed! See the
comments in the code and observe the sequence in which continuations
are stacked and executed. For the third variable, c, continuations
nest three deep, and the "frozen world" includes two other
continuations! It's very important to understand this example.

The search there iterates over the direct product of the three lists
(i.e., the set of all ordered triples). The looping---till a solution
is found---is implicit in the use of the *saved* stack of
continuations. (A condensed sketch of the amb machinery itself closes
the appendix below.)

Suggestion: implement the SAT solver example from Matt Might's blog
post.

Suggestion: implement other patterns discussed in the post.
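=============[ Appendix: two sketches from memory ]=============

As promised above, here is the core of the tree-walking machinery from
On Lisp 20.1, reconstructed from memory---defer to the book and to
cont-dft.scm for the authoritative code. The whole trick is the thunk
pushed onto *saved*: it closes over cc, the continuation of whoever
called dft-node.

  (define *saved* '())

  ;; Return the next leaf of TREE; before descending into the left
  ;; part, save a thunk that resumes the caller's continuation CC
  ;; with the walk of the rest of the tree.
  (define (dft-node tree)
    (cond ((null? tree) (restart))
          ((not (pair? tree)) tree)
          (else (call-with-current-continuation
                 (lambda (cc)
                   (set! *saved*
                         (cons (lambda () (cc (dft-node (cdr tree))))
                               *saved*))
                   (dft-node (car tree)))))))

  ;; Pop the most recently saved thunk and jump into it; 'done means
  ;; the tree is exhausted.
  (define (restart)
    (if (null? *saved*)
        'done
        (let ((cont (car *saved*)))
          (set! *saved* (cdr *saved*))
          (cont))))

Calling (cont) runs (cc (dft-node (cdr tree))), so the next leaf comes
back as the value of the original (call/cc ...) sexp---that is, as the
value of dft-node inside the caller's frozen context.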
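Likewise, a condensed version of the amb machinery along the lines of
Matt Might's post, again from memory (the names *fail-stack*, fail,
and assert follow the post as commonly reproduced; the course's
scheme/amb.scm may differ in details):

  (define *fail-stack* '())

  ;; Capture the current continuation as an ordinary value.
  (define (current-continuation)
    (call-with-current-continuation (lambda (cc) (cc cc))))

  ;; Backtrack: pop the most recent choice point and re-enter it.
  (define (fail)
    (if (null? *fail-stack*)
        (error "amb: no more choices")
        (let ((back-track-point (car *fail-stack*)))
          (set! *fail-stack* (cdr *fail-stack*))
          (back-track-point back-track-point))))

  ;; Pick an element of CHOICES; every re-entry through fail resumes
  ;; right here and tries the next element.
  (define (amb choices)
    (let ((cc (current-continuation)))
      (if (null? choices)
          (fail)
          (let ((choice (car choices)))
            (set! choices (cdr choices))
            (set! *fail-stack* (cons cc *fail-stack*))
            choice))))

  (define (assert condition)
    (if (not condition) (fail) #t))

  ;; Pythagorean triples over the direct product of three lists:
  (let ((a (amb '(1 2 3 4 5 6 7)))
        (b (amb '(1 2 3 4 5 6 7)))
        (c (amb '(1 2 3 4 5 6 7))))
    (assert (= (* c c) (+ (* a a) (* b b))))
    (display (list a b c))
    (newline))

fail pops the most recent choice point first, so c varies fastest,
just as in the solved example; this search prints (3 4 5).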