Much time has passed since my last entry. I’ve been frantically filling gaps in my education, so I’ve had little to say here. My notes are better off on my homepage, where I can better organize them and incorporate interactive demos.

However, I want to draw attention to a few delightful surprises that seem unfairly obscure.

**1. Succinct Turing-complete self-interpreters**

John McCarthy’s classic paper showed how to write a Lisp interpreter in Lisp itself. By adding a handful of primitives (quote, atom, eq, car, cdr, cons, cond) to lambda calculus, we get a Turing-complete language where a self-interpreter is easy to write and understand. For contrast, see Turing’s universal machine of 1936.
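
To make the flavor concrete, here is a miniature McCarthy-style evaluator in Python. This is a hedged sketch, not the paper’s code: the names `evl` and `lookup` are mine, atoms are Python strings, S-expressions are nested Python lists (rather than cons pairs, which simplifies `car` and `cdr`), and `label`/`define` conveniences are omitted.

```python
# A miniature McCarthy-style evaluator (a sketch, not the 1960 code).
# Atoms are Python strings; S-expressions are nested Python lists.
# Environments are lists of (name, value) pairs.

def lookup(name, env):
    for k, v in env:
        if k == name:
            return v
    raise KeyError(name)

def evl(e, env=()):
    env = list(env)
    if isinstance(e, str):                      # variable
        return lookup(e, env)
    op = e[0]
    if op == 'quote':                           # (quote e)
        return e[1]
    if op == 'atom':                            # atoms and () count as atomic
        v = evl(e[1], env)
        return 't' if isinstance(v, str) or v == [] else []
    if op == 'eq':                              # (eq a b), for atoms and ()
        a, b = evl(e[1], env), evl(e[2], env)
        return 't' if a == b and (isinstance(a, str) or a == []) else []
    if op == 'car':
        return evl(e[1], env)[0]
    if op == 'cdr':
        return evl(e[1], env)[1:]
    if op == 'cons':
        return [evl(e[1], env)] + evl(e[2], env)
    if op == 'cond':                            # (cond (p1 e1) (p2 e2) ...)
        for p, r in e[1:]:
            if evl(p, env) == 't':
                return evl(r, env)
        return []
    # application: ((lambda (params) body) arg1 arg2 ...)
    if isinstance(op, list) and op[0] == 'lambda':
        args = [evl(a, env) for a in e[1:]]
        return evl(op[2], list(zip(op[1], args)) + env)
    raise ValueError(e)
```

For example, `evl([['lambda', ['x'], ['cons', 'x', ['quote', ['b']]]], ['quote', 'a']])` evaluates `((lambda (x) (cons x (quote (b)))) (quote a))` to `['a', 'b']`.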

Researchers have learned more about lambda calculus since 1960, but many resources seem stuck in the past. Writing a Turing-complete interpreter in 7 lines is evidently still a big deal. *The Roots of Lisp* by Paul Graham praises McCarthy’s self-interpreter but explores no further. *The Limits of Mathematics* by Gregory Chaitin chooses Lisp over plain lambda calculus for dubious reasons. Perhaps McCarthy’s work is so life-changing that some find it hard to notice newer advances.

John Tromp’s fascinating website is a refreshing exception. It led me to a paper by Mogensen, who found a one-line self-interpreter without adding anything to lambda calculus:

    (f.(x.f(xx))(x.f(xx)))(em.m(x.x)(mn.em(en))(mv.e(mv)))

(I’ve suppressed the lambdas. Exercise: write a regex substitution that restores them.)
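
One possible answer to the exercise, under the assumption (true for this term) that every abstraction begins right after an open parenthesis, with its variables immediately followed by a dot:

```python
import re

s = "(f.(x.f(xx))(x.f(xx)))(em.m(x.x)(mn.em(en))(mv.e(mv)))"

# Insert a λ after each '(' that opens an abstraction, i.e. one whose
# variables are immediately followed by a dot. Bodies like (xx) or (en)
# contain no dot, so they are left untouched.
restored = re.sub(r'\(([a-z]+)\.', r'(λ\1.', s)
print(restored)
```

This prints `(λf.(λx.f(xx))(λx.f(xx)))(λem.m(λx.x)(λmn.em(en))(λmv.e(mv)))`.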

In fact, under some definitions, the program “λq.q(λx.x)” is a self-interpreter.
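
A sketch of why, under one such definition: if a term M is represented by λz.zM, then λq.q(λx.x) maps the representation of M back to M, since (λz.zM)(λx.x) reduces to (λx.x)M and then to M. In Python:

```python
# Representation: "M" = λz. z M   (quote)
# Interpreter:     E  = λq. q (λx. x)
quote = lambda m: lambda z: z(m)
unquote = lambda q: q(lambda x: x)

# unquote (quote M) reduces to M for any term M.
succ = lambda n: n + 1
assert unquote(quote(succ))(41) == 42
```

Whether this counts as a self-interpreter depends entirely on which definitions of representation and interpretation one accepts.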

**2. Hindley-Milner sort**

*Types and Programming Languages* (TaPL) by Benjamin C. Pierce is a gripping action thriller. Types are the heroes, and we follow their epic struggle against the most ancient and powerful foes of computer science and mathematics.

When we first meet them, types are humble guardians of a barebones language that can only express the simplest of computations involving booleans and natural numbers. As the story progresses, types gain additional abilities, enabling them to protect more powerful languages.

However, there seems to be a plot hole when types level up from Hindley-Milner to System F. As a “nice demonstration of the expressive power of pure System F”, the book mentions a program that can sort lists.

The details are left as an exercise for the reader. Working through them, we realize that a Hindley-Milner type system is already powerful enough to sort lists. Moreover, the details are far more pleasant in Hindley-Milner, because we avoid the ubiquitous type spam of System F.
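
To give the flavor of the exercise, here is an untyped Python sketch of insertion sort over Church-encoded lists, where a list is represented by its own right fold. This is only an illustration of the shape of the term: Python cannot exhibit the typing, which is the point at issue, and native Python pairs and booleans stand in for their Church encodings.

```python
# Church-encoded lists: a list is its own right fold.
nil = lambda c: lambda n: n
cons = lambda x: lambda xs: lambda c: lambda n: c(x)(xs(c)(n))

# Convert a Church list to a native list, for display.
to_py = lambda xs: xs(lambda x: lambda r: [x] + r)([])

def insert(x):
    """Insert x into a sorted Church list with a single fold.

    The fold's accumulator is a pair (list built so far, has x been
    inserted?); native pairs and booleans stand in for Church encodings.
    """
    def go(xs):
        def step(y):
            def f(acc):
                rest, done = acc
                if done:                    # x already placed further right
                    return (cons(y)(rest), True)
                if x >= y:                  # x belongs just after y
                    return (cons(y)(cons(x)(rest)), True)
                return (cons(y)(rest), False)
            return f
        rest, done = xs(step)((nil, False))
        return rest if done else cons(x)(rest)  # x smaller than everything
    return go

# Sorting is then just folding the input list with insert.
sort_church = lambda xs: xs(insert)(nil)
```

For example, `to_py(sort_church(cons(3)(cons(1)(cons(2)(nil)))))` yields `[1, 2, 3]`.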

System F is indeed more powerful than Hindley-Milner and deserves admiration, but because of well-typed self-application, polymorphic identity functions, existential types, and other gems; not because lists can be sorted.

**3. Self-interpreters for total languages**

They said it couldn’t be done.

According to *Breaking Through the Normalization Barrier: A Self-Interpreter for F-omega* by Matt Brown and Jens Palsberg, “several books, papers, and web pages” assert that self-interpreters for a strongly normalizing lambda calculus are impossible. The paper then shows that reports of their non-existence have been greatly exaggerated.

Indeed, famed researcher Robert Harper writes on his blog that “one limitation of total programming languages is that they are not universal: you cannot write an interpreter for T within T (see Chapter 9 of PFPL for a proof).” And as of now (April 2017), the Wikipedia article that Brown and Palsberg cite still declares that “it is impossible to define a self-interpreter in any of the calculi cited above”, referring to the simply typed lambda calculus, System F, and the calculus of constructions.

I was shocked. Surely academics are proficient with diagonalization by now? Did they all overlook a hole in their proofs?

More shocking is the stark simplicity of what Brown and Palsberg call a *shallow* self-interpreter for System F and System Fω, which is essentially a typed version of “λq.q(λx.x)”.

It relies on liberal definitions of representation (we only require an injective map from legal terms to normal forms) and of self-interpretation (mapping the representation of a term to its value), which are nonetheless still strong enough to upend conventional wisdom.

Which brings us to the most shocking revelation: there is no official agreement on the definitions of representation and self-interpretation, or even on what these concepts should be called.

Does this mean I should be wary of even the latest textbooks? Part of me hopes not, because I want to avoid learning falsehoods, but another part of me hopes so, for it means I’ve reached the cutting edge of research.

**See for yourself!**

Interactive demos of the above: