
Since people are a bit "WTF is this", here's the point of combinators.

A combinator is a function that doesn't mutate global state and doesn't close over variables. It's the base case in software. Pass it the same argument, get the same result, doesn't change anything about the rest of the system.
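Concretely, in Haskell (a sketch; the names are just illustrative):

    import Data.IORef

    -- A combinator: it closes over nothing, so the same input
    -- always gives the same output.
    twice :: (a -> a) -> a -> a
    twice f x = f (f x)

    -- Not a combinator: the returned action closes over `ref`
    -- and answers differently every time you run it.
    mkCounter :: IO (IO Int)
    mkCounter = do
      ref <- newIORef 0
      pure (modifyIORef ref (+ 1) >> readIORef ref)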

If you combine some of these combinators, you get another combinator - because when you put pure functions together, what you get is a pure function.

These are also the functions that are easy to write in assembly. Or C. Because they don't do very much. So if you write S and K in x64, and then compile Common Lisp to combinators written in terms of combinators written in terms of S and K, what you've got is Common Lisp running on those two hand-written assembly functions.
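For reference, S and K really are tiny (sketched here in Haskell rather than assembly):

    -- K throws away its second argument; S shares one argument
    -- between two functions and applies the results.
    k :: a -> b -> a
    k x _ = x

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    -- Identity comes for free: I = S K K.
    i :: a -> a
    i = s k k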

That's not a great idea for performance, but if you go with a less spartan set, maybe a few hundred of the most commonly occurring patterns inlined into each other and given names, you've got a viable "machine" to compile functional programs into.

This looks a bit like a Forth that fears mutation. Or a bytecode VM. Or a CPU, where the combinators are called "instructions".

So what combinators are, broadly, is an expression of applied computer science with implementation details ignored as much as possible. That's the sense in which it's simpler than the lambda calculus.

Equally, if you implement the lambda calculus on a handful of combinators, then implement lisp on the lambda calculus, then write stuff in that lisp, you've really cut down how much machine specific work needs to be done at the bottom of the stack.
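The lambda-to-combinator step is mechanical. Here's a minimal, unoptimized bracket-abstraction pass (a sketch; real compilers add the B/C optimizations mentioned downthread):

    -- Lambda terms plus the two target combinators.
    data Term = Var String | App Term Term | Lam String Term | S | K
      deriving Show

    -- compile eliminates every Lam, leaving only Var/App/S/K.
    compile :: Term -> Term
    compile (Lam x body) = abstract x (compile body)
    compile (App f a)    = App (compile f) (compile a)
    compile t            = t

    -- abstract x t: a term that, applied to an argument, behaves
    -- like t with x bound to that argument. Assumes t is Lam-free
    -- (compile guarantees this by recursing bottom-up).
    abstract :: String -> Term -> Term
    abstract x (Var y) | x == y = App (App S K) K     -- I = S K K
    abstract x (App f a) = App (App S (abstract x f)) (abstract x a)
    abstract _ t = App K t                            -- other vars, S, K

For instance, compile (Lam "x" (Var "x")) yields App (App S K) K, i.e. the identity.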



Somewhere, a function just called itself and raised a seed round.


It's pitch decks all the way down.


"implement the lambda calculus on a handful of combinators, then implement lisp on the lambda calculus, then write stuff in that lisp"

Having done this years ago as a student (not Lisp, but a simple language that macro-expanded into lambda calculus) - it is both mind-blowing that you can do anything (including recursion) with just S & K, and striking just how phenomenally inefficient this is... though as you say things do get a lot better as your "instruction set" gets larger.
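The recursion trick is the fixed-point combinator. Haskell's laziness lets you write it directly (in raw S/K you'd derive Y by bracket abstraction instead); a sketch:

    -- fix f is a value x satisfying x = f x. The lambda never
    -- mentions itself; all the recursion lives in fix.
    fix :: (a -> a) -> a
    fix f = f (fix f)

    factorial :: Integer -> Integer
    factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))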


Very much like (surely equivalent on some level) how very simple automata such as Rule 110 are Turing complete but the encoding schemes are Rube Goldberg machines.

An overview for Rule 110: https://cs.stackexchange.com/a/4780


Is this the Expression Problem?


Yeah, you can do all of that. But normally, you can (in fact, have to) choose a much more pragmatic computing substrate instead of graph rewriting, so unless you're trying to poke at the theoretical limits of computability, it's mostly a party trick, and not even a very amusing one unless you attend a very peculiar kind of party.

And also: numbers. I would be impressed if you at least used two's-complement integers, and decently effective destructors for your data structures.


The basis of the current implementation of Haskell in GHC is the "spineless tagless G-machine", which is a combinator-graph-reduction machine. It does implement numerals as primitives rather than making them out of functions.
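Numbers-as-functions is exactly the part GHC opts out of. For the curious, the Church encoding looks like this (a sketch; RankNTypes is only needed for the type alias):

    {-# LANGUAGE RankNTypes #-}

    -- A Church numeral applies a function n times; there is no
    -- integer representation anywhere.
    type Church = forall a. (a -> a) -> a -> a

    zero :: Church
    zero _ z = z

    suc :: Church -> Church
    suc n s z = s (n s z)

    add :: Church -> Church -> Church
    add m n s z = m s (n s z)

    -- Escape hatch back to machine integers.
    toInt :: Church -> Int
    toInt n = n (+ 1) 0

toInt (add (suc zero) (suc zero)) gives 2, after a pile of function calls that a primitive integer avoids entirely.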

Not only have combinator-graph-reduction machines been used as a compilation target for pure functional languages, they have been implemented both in hardware and in software. Once you introduce the B, B*, C, and Y combinators as primitives, you can get reasonably efficient compiled programs. The pure combinator-graph-reduction approach isn't currently in fashion; https://dl.acm.org/doi/pdf/10.1145/62678.62716 is a paper by A. C. Norman from 1988 discussing the reasons why it largely fell from grace. Koopman did his dissertation on such a machine around the same time.
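For reference, B and C as plain functions, plus one common reading of B* (Turner's definition assumed here); Y is the fixed-point combinator sketched upthread:

    -- B is composition, C swaps arguments.
    b :: (y -> z) -> (x -> y) -> x -> z
    b f g x = f (g x)

    c :: (x -> y -> z) -> y -> x -> z
    c f a x = f x a

    -- Turner's B*: compose two functions, hand the result to k.
    bStar :: (z -> w) -> (y -> z) -> (x -> y) -> x -> w
    bStar k f g x = k (f (g x))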


Are there any Lisps out there that are implemented atop a few combinators like this? Would be a fun way to port Lisps.



