======================[ Lambda calculus tutorials ]========================

Lambda calculus has been explained from all kinds of angles over the
years. The explanation that works best for you may be the class
textbook's, or any of the following:

https://personal.utdallas.edu/~gupta/courses/apl/lambda.pdf
https://www.cs.bham.ac.uk/~axj/pub/papers/lambda-calculus.pdf
https://www.cse.chalmers.se/research/group/logic/TypesSS05/Extra/geuvers.pdf

The following past notes may help with the syntax of lambda calculus.
In these notes I tried to initially spell out the notation as much as
I could:

https://cosc59.gitlab.io/lisp/lambda-calculus-booleans.txt

Lambda calculus pair, implemented in LISP:

https://cosc59.gitlab.io/lisp/lambda-calculus-pair.txt

=========================[ Philosophical notes ]===========================

Lambda calculus (LC) is said to be about _functions_, but it's really
about _strings_ that obey a few simple formation and substitution
rules. These strings can stand for things like Booleans, Integers,
Pairs/Cons cells, etc., _and_ serve as recipes for rewriting other
strings. That is to say, these strings are both objects and programs
that act on objects.

This should remind you of LISP's idea that the fundamental data type,
the list of cons cells, and LISP programs are the same thing: all
programs are lists, and all composite objects are lists.

What's really amazing is that LC's strings and rules actually have all
the computing power of more complex models of computation such as CPUs
or Turing machines, without any of the features that may seem
indispensable. For example:

1. Names aren't needed. De Bruijn's notation shows that only the
   scoping matters, and it's actually the scoping that makes variables
   and structures work. We saw a reflection of this in how C functions
   were compiled: all local variables were just offsets off of the
   single register we called the Frame Pointer (RBP). The named
   members of C structs likewise become offsets.
   But LC shows it goes deeper than that.

2. Indirection isn't needed. Accessing memory or passing control by an
   indirect pointer or index seems to be the core feature of the C and
   CPU models. Even those programming languages that don't use
   pointers use references of some kind, e.g., in the guise of LISP
   symbols. However, it turns out that these are merely a convenience,
   and removing them doesn't take away from the computing power.
   Amazing.

3. Addresses aren't needed. In von Neumann CPUs and Turing machines
   there is the concept of addresses in memory and slots on the tape.
   However, LC shows that addresses aren't needed either---it's all
   about scopes and places in the strings that you can vary and
   rewrite. Addresses are secondary, just like names.

Computation turns out to be all about string rewriting, just like
proofs in algebra and other parts of math. So the key intuition of LC
is that when you are writing programs you are really writing proofs.
You begin to get this intuition with LISP when writing recursive
functions that work on recursively defined data objects, with the
CONS cell being essentially the inductive step of both your object's
definition and the function's correctness proof. But AFAIK it took
over 50 years to make this intuition mathematically precise.

=========================[ How proofs and programs can go wrong ]==========

https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/
--- Why the right mathematical abstractions matter enormously to
programming languages and practice. We will revisit this theme later
in the course.

https://www.cs.yale.edu/publications/techreports/tr82.pdf --- Historic
report on the nature and challenges of CS proofs and program
verification.

Note Alan Perlis' famous foreword to the most famous programming
languages textbook:

https://cosc59.gitlab.io/making-systems-changeable.txt
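To make the encodings from the Philosophical notes concrete: here is a
sketch of Church Booleans and pairs using Python lambdas as stand-ins
for LC terms. The names (TRUE, PAIR, etc.) and the to_bool helper are
my own choices for illustration, not part of LC itself; the LISP
versions linked above do the same thing in LISP syntax.

```python
# Church Booleans: a Boolean is a two-argument selector.
TRUE  = lambda a: lambda b: a   # picks the first argument
FALSE = lambda a: lambda b: b   # picks the second argument

# IF is just application: a Church Boolean *is* the conditional.
IF = lambda p: lambda then: lambda els: p(then)(els)

# Logical operators, defined purely by substitution:
NOT = lambda p: p(FALSE)(TRUE)
AND = lambda p: lambda q: p(q)(FALSE)
OR  = lambda p: lambda q: p(TRUE)(q)

# Church pairs: a pair is a function that holds two values and
# waits for a selector to extract one of them.
PAIR = lambda x: lambda y: lambda sel: sel(x)(y)
FST  = lambda p: p(TRUE)    # TRUE selects the first component
SND  = lambda p: p(FALSE)   # FALSE selects the second

# To observe results, convert a Church Boolean to a host Boolean:
def to_bool(p):
    return p(True)(False)

print(to_bool(AND(TRUE)(NOT(FALSE))))    # True
print(FST(PAIR(1)(2)), SND(PAIR(1)(2)))  # 1 2
```

Note that nothing here inspects anything: every operator works purely
by plugging arguments into selector positions, which is exactly the
substitution-on-strings view of LC.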
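Point 1 above ("names aren't needed") can also be sketched in code.
Below, named lambda terms are rewritten into De Bruijn's nameless
form, where each variable becomes its distance (in binders) to the
lambda that binds it. The tuple-based term syntax is my own ad hoc
encoding, chosen only to keep the example short.

```python
# Term syntax (my own): ('var', name), ('lam', name, body), ('app', f, a).

def de_bruijn(term, env=()):
    """Rewrite a named lambda term into nameless De Bruijn form.
    env is the stack of binder names, innermost first."""
    tag = term[0]
    if tag == 'var':
        # The name disappears: all that remains is the distance
        # to the binder (0 = innermost enclosing lambda).
        return ('var', env.index(term[1]))
    if tag == 'lam':
        # The binder keeps no name; it only extends the scope stack.
        return ('lam', de_bruijn(term[2], (term[1],) + env))
    if tag == 'app':
        return ('app', de_bruijn(term[1], env), de_bruijn(term[2], env))

# TRUE = Lx.Ly.x and FALSE = Lx.Ly.y differ only in one index:
TRUE_NAMED  = ('lam', 'x', ('lam', 'y', ('var', 'x')))
FALSE_NAMED = ('lam', 'x', ('lam', 'y', ('var', 'y')))
print(de_bruijn(TRUE_NAMED))   # ('lam', ('lam', ('var', 1)))
print(de_bruijn(FALSE_NAMED))  # ('lam', ('lam', ('var', 0)))
```

This is the same idea as C locals becoming offsets off RBP: what looks
like a name is really just a position within a scope.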
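The remark that the CONS cell is the inductive step of both the data
definition and the correctness proof can be seen in miniature below.
The encoding (tuples as CONS cells, None as NIL) is my own stand-in,
for illustration only.

```python
# CONS cells as nested pairs, NIL as None (an ad hoc encoding).
NIL = None
def cons(head, tail):
    return (head, tail)

# A recursive function on this recursively defined data type.
# Its two branches line up with the two cases of an induction proof:
def length(lst):
    if lst is NIL:           # base case: length(NIL) = 0
        return 0
    head, tail = lst         # inductive step: length(cons(h, t)) = 1 + length(t)
    return 1 + length(tail)

xs = cons(1, cons(2, cons(3, NIL)))
print(length(xs))  # 3
```

Proving length correct by induction on the list structure retraces the
function's own case split, which is the program-as-proof intuition the
notes describe.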