I would say dense code tends to help code reviews. It's just a bit unintuitive to spend minutes looking at a page of code when you're used to taking a few seconds per page in more verbose languages.
I also find it easier to just grab the code and interactively play with it, compared to doing that with 40 pages of code.
FIXAPL is an interesting spin on APL without overloading on arity.
Many array languages overload glyphs on arity, so their behavior depends on whether you call them with one argument ("monadic form") or two arguments ("dyadic form"):
monadic: G A1
dyadic: A1 G A2
where G is the glyph and A1, A2 are arguments.
The overloading can lead to confusion (but is also interesting in its own way because you can reduce the number of glyphs in the language surface).
That overloading is, I'd say, also one of the reasons array languages might not be as approachable, and one aspect of the "difficult to read" argument.
Maybe even more important: avoiding overloading on arity helps with composition (I still have to dig into this deeper).
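To make the overloading concrete: in APL, for example, the same `-` glyph means negate when monadic and subtract when dyadic. A minimal TypeScript sketch of how an interpreter might dispatch one glyph on arity (the `makeGlyph` helper and its names are my own illustration, not from any real implementation):

```typescript
// Build a single "glyph" function that picks its meaning from arity:
// one argument -> monadic behavior, two arguments -> dyadic behavior.
function makeGlyph(
  monadic: (a: number) => number,
  dyadic: (a: number, b: number) => number
): (a1: number, a2?: number) => number {
  return (a1, a2) => (a2 === undefined ? monadic(a1) : dyadic(a1, a2));
}

// APL's "-" glyph: negate when monadic, subtract when dyadic.
const minus = makeGlyph(
  (a) => -a,
  (a, b) => a - b
);

console.log(minus(5));    // monadic: negate -> -5
console.log(minus(8, 5)); // dyadic: subtract -> 3
```

A language like FIXAPL that avoids this would instead use two distinct glyphs, at the cost of a larger glyph set.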
This is a bit like saying stop using Ubuntu, use Debian instead.
Both llama.cpp and ollama are great and focused on different things and yet complement each other (both can be true at the same time!)
Ollama has great UX and also supports inference via MLX, which has better performance on Apple silicon than llama.cpp.
I'm using llama.cpp, ollama, LM Studio, MLX, etc., depending on what is most convenient for me at the time to get done what I want to get done (e.g. a specific model config to run, MCP, just trying a prompt quickly, …)
> This is a bit like saying stop using Ubuntu, use Debian instead.
Not really, because Ubuntu has always acknowledged Debian and explicitly documented the dependency:
> Debian is the rock on which Ubuntu is built.
> Ubuntu builds on the Debian architecture and infrastructure and collaborates widely with Debian developers, but there are important differences. Ubuntu has a distinctive user interface, a separate developer community (though many developers participate in both projects) and a different release process.
> Both llama.cpp and ollama are great and focused on different things and yet complement each other
According to the article, ollama is not great (that’s an understatement), focused on making money for the company, stealing clout and nothing else, and hardly complements llama.cpp at all since not long after the initial launch. All of these are backed by evidence.
You may disagree, but then you need to refute OP’s points, not try to handwave them away with a BS analogy that’s nothing like the original.
They might not use the word, but the behavior they describe is evil:
"
This isn’t a matter of open-source etiquette, the MIT license has exactly one major requirement: include the copyright notice. Ollama didn’t.
The community noticed. GitHub issue #3185 was opened in early 2024 requesting license compliance. It went over 400 days without a response from maintainers. When issue #3697 was opened in April 2024 specifically requesting llama.cpp acknowledgment, community PR #3700 followed within hours. Ollama’s co-founder Michael Chiang eventually added a single line to the bottom of the README: “llama.cpp project founded by Georgi Gerganov.”
"
Let's say I have a bunch of objects (e.g. parquet) in R2, can the agent mount them? Or how do I best give the agent access to the objects? HTTP w/ signed urls? Injecting the credentials?
Dynamic Workers don't have a built-in filesystem, but you can give them access to one.
What you would do is give the Worker a TypeScript RPC interface that lets it read the files -- which you implement in your own Worker. To give it fast access, you might consider using a Durable Object. Download the data into the Durable Object's local SQLite database, then create an RPC interface to that, and pass it off to the Dynamic Worker running on the same machine.
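A minimal TypeScript sketch of what that RPC surface might look like. The `FileReader` interface and its method names are hypothetical, and an in-memory Map stands in for the Durable Object's SQLite store; in a real Worker you would populate it from R2 (e.g. `env.BUCKET.get(key)`) and hand the reader across the RPC boundary to the Dynamic Worker:

```typescript
// Hypothetical read-only file interface exposed to the Dynamic Worker.
interface FileReader {
  list(): string[];
  readFile(path: string): Uint8Array;
}

// In-memory stand-in for the Durable Object's local SQLite storage.
class InMemoryFileReader implements FileReader {
  constructor(private files: Map<string, Uint8Array>) {}

  list(): string[] {
    return [...this.files.keys()];
  }

  readFile(path: string): Uint8Array {
    const data = this.files.get(path);
    if (!data) throw new Error(`no such file: ${path}`);
    return data;
  }
}

// The host Worker downloads the R2 objects once, then serves reads locally.
const reader = new InMemoryFileReader(
  new Map([["data/part-0.parquet", new Uint8Array([80, 65, 82, 49])]])
);
console.log(reader.list()); // lists the available object keys
```

The point of the Durable Object indirection is that reads then hit local storage on the same machine instead of going back to R2 on every call.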
See also this experimental package from Sunil that's exploring what the Dynamic Worker equivalent of a shell and a filesystem might be:
At first I was trying to figure out why the parent comment was getting downvoted, then I read the last line. Yeesh, yeah, you don't need to "learn" to sharpen, just get one of those pull-throughs. There is a minuscule learning curve with it. It doesn't do the best sharpening job, but as a particular YouTuber once said: "The best sharpener is the one you will use."
People don't want to do it and they don't want to learn to do it. It's easier for them to buy a new knife. They're not expensive. Maybe keep the old one for garage stuff and gardening.
A new knife might not be expensive, but it's a new thing that has to be produced, and packaged, and shipped, and stored, and so on. Just keep your old stuff in shape, people.