Quadtrees are also quite useful for generating fractals.
A very related project of mine, Lambda Screen [0], explores this by encoding these functional quadtrees directly in lambda calculus and rendering the structure based on Church booleans being true (white) or false (black).
With fixed point recursion, this allows for very tiny definitions of IFS fractals. For example, fractals like the Sierpinski triangle/carpet only require ~50 bits of binary lambda calculus [1] [2]!
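To give a rough flavour in Haskell (this is only a sketch of the general principle, not Lambda Screen's actual encoding, and the names are made up): leaves are booleans for white/black, and one corecursive quadtree definition plays the role of the fixed point.

    -- sketch only: Bool leaves stand in for Church booleans
    -- (True = white, False = black), Split holds the four quadrants
    data Quad = Pixel Bool | Split Quad Quad Quad Quad  -- tl tr bl br

    -- one corecursive definition generates the whole infinite fractal;
    -- three quadrants recurse, one stays white (Sierpinski-style IFS)
    sierpinski :: Quad
    sierpinski = Split sierpinski sierpinski (Pixel True) sierpinski

    -- unfold to a finite depth; unresolved detail is drawn black
    render :: Int -> Quad -> [[Bool]]
    render 0 (Pixel b) = [[b]]
    render 0 _         = [[False]]
    render d (Pixel b) = replicate (2 ^ d) (replicate (2 ^ d) b)
    render d (Split tl tr bl br) =
      zipWith (++) (render (d - 1) tl) (render (d - 1) tr)
        ++ zipWith (++) (render (d - 1) bl) (render (d - 1) br)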
The annihilating interaction between abstraction and application nodes is well known in interaction net research to correspond to β-reduction, as is also explained in the associated research paper [1].
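As a hedged sketch of that rule (my own toy port naming, not the paper's formulation): when a λ-agent and an @-agent meet on their principal ports, both vanish and their auxiliary ports get wired together pairwise, which is exactly one β-step.

    -- toy representation: agents with explicitly named wires
    type Wire = Int

    data Agent
      = LamAgent { binder :: Wire, body :: Wire, lamPrincipal :: Wire }
      | AppAgent { arg :: Wire, result :: Wire, appPrincipal :: Wire }

    -- annihilation: if the two agents face each other on the same
    -- principal wire, they disappear and binder<->arg, body<->result
    -- get connected -- the graph version of one beta step
    annihilate :: Agent -> Agent -> Maybe [(Wire, Wire)]
    annihilate (LamAgent b d p) (AppAgent a r p')
      | p == p' = Just [(b, a), (d, r)]
    annihilate _ _ = Nothing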
α-conversion is not required in interaction nets. η-reduction is an additional rule not typically discussed, but see for example [2].
> α-conversion is not required in interaction nets. η-reduction is an additional rule not typically discussed, but see for example [2].
Which makes the sloppy use of "λ-Reduction" in place of "β-reduction"--the only form of reduction or conversion applied here--even less defensible. Maybe their being non-native English speakers is partly to blame?
> While easy, it sadly doesn't preserve semantics.
There is actually an easy way that does preserve semantics, at least up to WHNF: it's called closed reduction. Mackie has worked on it a bunch (see some resources [1]).
An even simpler implementation is Sinot's token passing.
The problem with both of these approaches is that they decrease the amount of sharing and the potential for parallelism, which are typically the reasons for using interaction nets in the first place.
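For a very rough flavour of the restriction (this only shows the core side condition, not Mackie's actual rule set, and the names are mine): a β-redex only fires when its argument is closed, so substitution can never capture anything and α-renaming is never needed.

    import qualified Data.Set as Set

    data Term = Var String | Lam String Term | App Term Term

    free :: Term -> Set.Set String
    free (Var x)   = Set.singleton x
    free (Lam x b) = Set.delete x (free b)
    free (App f a) = Set.union (free f) (free a)

    -- head reduction towards WHNF, but only with closed arguments;
    -- the full calculus has further rules for the remaining cases
    step :: Term -> Maybe Term
    step (App (Lam x b) a)
      | Set.null (free a) = Just (subst x a b)
    step (App f a)        = fmap (\f' -> App f' a) (step f)
    step _                = Nothing

    -- naive substitution is safe here because the argument is closed
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)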
This is quite different. Salvadori's work aims for optimal reduction of the full lambda calculus (which requires something called "bookkeeping"/"oracle"), while HOC works on optimal/parallel reduction of a certain subset of the lambda calculus.
Both approaches have been researched for a long time now, where HOC's subset is typically referred to as the "abstract algorithm". For example, a version of the lambda calculus where any variable can be used at most once (the "affine lambda calculus") can be reduced optimally with interaction nets without requiring any bookkeeping.
The novel thing about Salvadori's work is that it develops a new (and better explained) bookkeeping mechanism.
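For illustration (my own throwaway check, not taken from either project): the bookkeeping-free fragment is easy to recognise syntactically -- every bound variable may occur at most once.

    data Term = Var String | Lam String Term | App Term Term

    occurrences :: String -> Term -> Int
    occurrences x (Var y)   = if x == y then 1 else 0
    occurrences x (Lam y b) = if x == y then 0 else occurrences x b
    occurrences x (App f a) = occurrences x f + occurrences x a

    -- affine terms: no bound variable is used more than once, so
    -- nothing ever gets duplicated and no oracle is needed
    affine :: Term -> Bool
    affine (Var _)   = True
    affine (Lam x b) = occurrences x b <= 1 && affine b
    affine (App f a) = affine f && affine a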
Just "Scott encoding" -- "Scott-Mogensen" refers to a meta encoding of LC in LC. Scott's encoding is fine, but it requires fixpoint recursion for many operations, as you said.
Interestingly though, Mogensen's ternary encoding [1] does not require fixpoint recursion and is the most efficient (in terms of compactness) encoding in LC known right now.
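To make the tradeoff concrete, here are Scott numerals transcribed to Haskell (standard definitions, just my own spelling): pattern matching (and hence pred) is a single constant-time call, but add already needs explicit fixpoint recursion.

    {-# LANGUAGE RankNTypes #-}

    -- a Scott numeral is its own case analysis: give it a zero case
    -- and a successor case
    newtype Scott = Scott (forall r. r -> (Scott -> r) -> r)

    zero :: Scott
    zero = Scott (\z _ -> z)

    suc :: Scott -> Scott
    suc n = Scott (\_ s -> s n)

    -- constant-time predecessor: one pattern match, no recursion
    pre :: Scott -> Scott
    pre (Scott n) = n zero id

    -- addition needs general (fixpoint) recursion, unlike Church numerals
    add :: Scott -> Scott -> Scott
    add (Scott m) n = m n (\m' -> suc (add m' n))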
> Just use [..], seriously
Do you have any further arguments for Scott's encoding? There are many number encodings with a constant-time predecessor, and with every number requiring O(n) space and `add` being this complex, it becomes quite hard to like.
First, if we're talking about practicality, and not just theoretical considerations, then the numbers should not be stored as Peano integers (a.k.a. base 1) in the first place. Use lists of 8-tuples of bools or native machine integers or whatever; otherwise, you will suffer from O(n) complexities somewhere in your arithmetic -- that's just how base 1 works. Fretting over which particular basic operation has to house this linear-over-logarithmic overhead is IMO unproductive: use better data structures instead of counting sticks.
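A rough sketch of what I mean, with plain lists of bits standing in for the 8-tuples (names and layout are mine, purely illustrative): once numbers are stored positionally, addition costs O(digits) rather than O(n).

    -- little-endian list of bits: least significant bit first
    type Bits = [Bool]

    -- ripple-carry addition, linear in the number of digits,
    -- i.e. logarithmic in the magnitude of the number
    addBits :: Bits -> Bits -> Bits
    addBits = go False
      where
        go c []       ys       = carry c ys
        go c xs       []       = carry c xs
        go c (x : xs) (y : ys) =
          let s  = x /= (y /= c)                     -- xor of x, y, carry
              c' = (x && y) || (x && c) || (y && c)  -- majority = new carry
          in  s : go c' xs ys
        carry False xs       = xs
        carry True  []       = [True]
        carry True  (x : xs) = not x : carry x xs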
Second, if we're talking about other data structures, especially recursive ones, e.g. lists, then having easily available (and performant) structural recursion is just more useful than having right folds out of the box: those can be recreated easily, but going in the other direction is much more convoluted.
> then the numbers should not be stored as Peano integers a.k.a. base 1 in the first place
That's my point though. The linked n-ary encoding by Mogensen, for example, does not suffer from such complexities. Depending on the reducer's implementation, (supported) operations on my presented de Bruijn numerals are also sublinear. I doubt 8-tuples of Church booleans would be efficient though - except when letting machine instructions leak into LC.
Though I agree that the focus of functional data structures should lie on embedded folds. Compared to nested Church pairs, folded Church tuples (\cons nil.cons a (cons b nil)) or Church n-tuples (\s.s a b c) should be preferred in many cases.
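In Haskell-ish terms (RankNTypes, my own type names), the two shapes mentioned above look roughly like this: the n-tuple hands all elements to one continuation, while the folded tuple threads them through cons/nil and so carries its own right fold.

    {-# LANGUAGE RankNTypes #-}

    -- Church n-tuple: \s. s a b c
    type Triple a = forall r. (a -> a -> a -> r) -> r

    triple :: a -> a -> a -> Triple a
    triple a b c = \s -> s a b c

    -- folded tuple: \cons nil. cons a (cons b nil), i.e. its own foldr
    type Folded a = forall r. (a -> r -> r) -> r -> r

    folded2 :: a -> a -> Folded a
    folded2 a b = \cons nil -> cons a (cons b nil)

    -- e.g. summing a folded tuple needs no extra recursion:
    -- folded2 1 2 (+) 0 == 3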
However, I wouldn't define bruijn as being caramelized just yet. Personally, I view as syntactic sugar only syntax that's expanded to the target language by the parser/compiler. In bruijn's case, there is barely any such syntactic sugar aside from number/string/char encodings. Everything else is part of the infix/prefix/mixfix standard library definitions, which get substituted as part of the translation to LC.
I don't know any quadtree/low-level encoding of LC that could be memoized like that. Though you could, for example, cache the reduction of any term by hash and substitute matching hashes with their normal forms. This doesn't really work for lazy reducers or when you do not reduce strongly. And, of course (same as hashlife), this would use a lot of memory. With a lot more possibilities than GoL per "entity" (NxN grid vs variables/applications/abstractions), there will also be a lot more hash misses.
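Roughly what I mean, as a non-committal sketch (normalize and hashTerm are stand-ins, not an actual reducer, and strong reduction is assumed):

    import qualified Data.Map.Strict as M
    import Data.IORef

    data Term = Var Int | Lam Term | App Term Term  -- de Bruijn, for example

    -- cache normal forms by term hash; on a hit the stored nf is reused
    memoNf :: IORef (M.Map Int Term) -> (Term -> Term) -> (Term -> Int)
           -> Term -> IO Term
    memoNf cacheRef normalize hashTerm t = do
      cache <- readIORef cacheRef
      case M.lookup (hashTerm t) cache of
        Just nf -> pure nf              -- hash hit
        Nothing -> do
          let nf = normalize t          -- assumes strong reduction to nf
          modifyIORef' cacheRef (M.insert (hashTerm t) nf)
          pure nf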
There's also graph encodings like interaction nets, which have entirely local reduction behavior. Compared to de Bruijn indices, bindings are represented by edges, which makes them more applicable to hash consing. I once spent some time trying to add this kind of memoization but there are some further challenges involved, unfortunately.
Or just (flip .), which also allows ((flip .) .) etc. for further flips.
In Smullyan's "To Mock a Mockingbird", these combinators are described as the "cardinal combinator once/twice/etc. removed", where the cardinal combinator itself corresponds to flip.
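A quick GHCi illustration of the "once/twice removed" reading (the example function is just made up):

    f :: String -> String -> String -> String
    f a b c = a ++ b ++ c

    -- flip swaps arguments 1 and 2, (flip .) swaps 2 and 3,
    -- ((flip .) .) would swap 3 and 4 of a four-argument function
    -- ghci> flip f "x" "y" "z"
    -- "yxz"
    -- ghci> (flip .) f "x" "y" "z"
    -- "xzy"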
[0]: https://text.marvinborner.de/2024-03-25-02.html
[1]: https://lambda-screen.marvinborner.de/?term=ERoc0CrYLYA%3D
[2]: https://lambda-screen.marvinborner.de/?term=QcCqqttsFtsI0OaA