Take a look at floppy disk controllers like the Applesauce, Greaseweazle, and KryoFlux for preserving floppies by recording at the flux-transition level.
The Apple II had a non-linear layout of video memory, so programmer Jordan Mechner used a layer of indirection: an array of pointers to rows of screen memory.
He realized that inverting the screen was as simple as reversing the row-pointer array. Then he managed to convince Broderbund to ship a double-sided floppy with that change in the software.
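A sketch of the trick in Python: the row-base formula below is the standard Apple II hi-res interleave, but the `plot`/`flip_vertically` helpers are illustrative, not Mechner's actual code.

```python
ROWS = 192  # Apple II hi-res display is 192 scanlines tall

def apple2_row_base(y):
    # The real hi-res interleave: consecutive rows are scattered
    # through memory rather than laid out linearly.
    return 0x2000 + (y & 7) * 0x400 + ((y >> 3) & 7) * 0x80 + (y >> 6) * 0x28

# The layer of indirection: an array of row base addresses.
row_table = [apple2_row_base(y) for y in range(ROWS)]

def plot(screen, x, y, value):
    # All drawing code indexes through the table...
    screen[row_table[y] + x] = value

def flip_vertically():
    # ...so flipping the whole display is just reversing the table.
    row_table.reverse()
```

Every routine that draws through the table picks up the flip for free; nothing else in the renderer has to change.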
There were some others, like the Hercules, which was upward-compatible with the MDA and did graphics as well as text.
They didn't really do any graphics "processing"; they just displayed memory-mapped pixels in various formats.
They were memory-mapped, and the MDA used a different memory block than the CGA/EGA/VGA, so you could have two separate monitors simultaneously, e.g. running Turbo Debugger on the MDA text display.
> What might a programming language designed specifically as a UI for coding agents look like?
A bad idea, probably. LLM output needs to be reviewed very carefully; optimizing the language away from human review would probably make the process more expensive. Also, where would the training data in such a language come from?
So a programming language designed explicitly for coding agents would need to take human review into account. What, then, are the most efficient and concise ways to express programming concepts?
In the end, we circle back to lisps: once you're used to them, they're as easy for humans to parse as they are for machines. Shame LLMs struggle with special characters.
Surely lisps don't have drastically more special characters than other languages? A few more parens, sure, but fewer curly braces, commas, semicolons, etc.
Also, it feels like making sure the tokeniser has distinct tokens for left/right parens would be all that's required to make LLMs work with them.
Don't get me wrong, they do work with lisps already; I've had plenty of success having various LLMs create and manage Clojure code, so we aren't that far off.
But I'm having way more "unbalanced parentheses" errors than with Algol-like languages. Not sure if it's because of a lack of training data, post-training, or just needing special tokens in the tokenizer, but there is a notable difference today.
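One cheap mitigation regardless of tokenizer: have the agent loop run a delimiter check on generated code before handing it back. A minimal sketch follows; it ignores strings, comments, and reader macros, so it's illustrative rather than a real Clojure reader.

```python
def check_delimiters(src):
    """Return a description of the first unbalanced delimiter, or None."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []  # open delimiters still waiting for their close
    for i, ch in enumerate(src):
        if ch in '([{':
            stack.append((ch, i))
        elif ch in pairs:
            if not stack or stack[-1][0] != pairs[ch]:
                return f"unexpected {ch!r} at offset {i}"
            stack.pop()
    if stack:
        ch, i = stack[-1]
        return f"unclosed {ch!r} at offset {i}"
    return None
```

Feeding the error string back to the model gives it a concrete position to repair instead of a generic compiler complaint.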
Yeah, that makes sense. But the crux is probably that most of us learned programming via Algol-like languages, like C or PHP, and only after decades of programming did we start looking into lisps.
But don't take my word for it: ask the programmers around you who've been looking into lisps how they feel about it.
I don’t think that would be a real issue in practice. Coding LLMs need to be able to cope with complicated expressions in many languages. If they can produce legitimate code for other languages, they can be taught to cope with s-expressions.
Until such a point where we have agents not trained on human language or programming languages, I think the answer is something that's really good for people as well:
- one obvious way to do things
- locality of reasoning, no spooky action at a distance
- tests as a first class feature of the language
- a quality standard library / reliable dependency ecosystem
- can compile, type-check, lint, and run tests in a single command
- can reliably mock everything, either with effects or something else, such that again we maintain locality of reasoning
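Python's doctest is an existing approximation of a couple of these points: tests live next to the code they check, and one command runs them. A sketch with a hypothetical `slugify` function:

```python
def slugify(title):
    """Lower-case a title and join its words with hyphens.

    The tests are part of the definition itself:

    >>> slugify("Locality of Reasoning")
    'locality-of-reasoning'
    >>> slugify("  One   obvious way  ")
    'one-obvious-way'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    # A single command checks every example in this file:
    #   python -m doctest this_file.py
    import doctest
    doctest.testmod()
```

For an agent, co-located tests like these keep the spec and the implementation in one context window, which is exactly the locality the list above asks for.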
The old saying applies: a complex system that works is made up of simple systems that work.
We want a language where you can get the small systems working and tested, and then build upon them.
Because all of these things work towards minimising the context needed to iterate, and shortening the feedback loop of iteration.
You're contradicting yourself. Raw, fully optimized executables for production would mean machine code for the target platform, not an intermediate bytecode that still requires a VM.
Not really, it would be one of the steps in the chain between design and implementation for a specific hardware platform. That is, unless that code is only ever to run on a single hardware platform, a rare occurrence outside of embedded applications.
The same reason an actual AI wouldn't play chess by brute forcing every possible position. Intelligent systems are about reasoning, not simply computing, and that requires operating at the level of abstraction where your intelligence is most effective.