Hacker News | tosh's comments


I would say dense code tends to help code reviews. It is just a bit unintuitive to spend minutes looking at a page of code when you are used to taking a few seconds per page in more verbose languages.

I also find it easier to just grab the code and interactively play with it, compared to doing that with 40 pages of code.


FIXAPL is an interesting spin on APL without overloading on arity.

Many array languages overload glyphs on arity: the same glyph behaves differently depending on whether you call it with one argument (in "monadic form") or two arguments (in "dyadic form").

monadic: G A1

dyadic: A1 G A2

where G is the glyph and A1, A2 are arguments.

The overloading can lead to confusion (but is also interesting in its own way because you can reduce the number of glyphs in the language surface).

That overloading is, I would say, also one of the reasons array languages might not be as approachable, and one aspect of the 'difficult to read' argument.

Maybe even more important: avoiding overloading on arity helps with composition (I still have to dig into this deeper).
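To make the monadic/dyadic distinction concrete, here is a hypothetical sketch in Python (not real APL; the name `minus` is made up and stands in for a glyph like APL's `-`, which negates when monadic and subtracts when dyadic):

```python
# Hypothetical sketch of overloading one "glyph" on arity,
# like APL's `-`: negate when monadic, subtract when dyadic.
def minus(*args):
    if len(args) == 1:                      # monadic form: G A1
        (a1,) = args
        return [-x for x in a1]
    if len(args) == 2:                      # dyadic form: A1 G A2
        a1, a2 = args
        return [x - y for x, y in zip(a1, a2)]
    raise TypeError("minus takes 1 or 2 arguments")

print(minus([1, 2, 3]))             # monadic: [-1, -2, -3]
print(minus([5, 5, 5], [1, 2, 3]))  # dyadic:  [4, 3, 2]
```

The dispatch-on-argument-count is exactly what gets in the way of composition: a pipeline that passes `minus` around cannot tell from the name alone which of the two behaviors it will get.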


This is a bit like saying stop using Ubuntu, use Debian instead.

Both llama.cpp and ollama are great and focused on different things and yet complement each other (both can be true at the same time!)

Ollama has great UX and also supports inference via MLX, which has better performance on Apple Silicon than llama.cpp.

I'm using llama.cpp, Ollama, LM Studio, MLX, etc., depending on what is most convenient for me at the time to get done what I want to get done (e.g. a specific model config to run, MCP, quickly trying a prompt, …).


> This is a bit like saying stop using Ubuntu, use Debian instead.

Not really, because Ubuntu has always acknowledged Debian and explicitly documented the dependency:

> Debian is the rock on which Ubuntu is built.

> Ubuntu builds on the Debian architecture and infrastructure and collaborates widely with Debian developers, but there are important differences. Ubuntu has a distinctive user interface, a separate developer community (though many developers participate in both projects) and a different release process.

Source: https://ubuntu.com/community/docs/governance/debian

Ollama never has for llama.cpp. That's all that's being asked for: credit.


OK. That says absolutely nothing about actual UX or anything that matters to most actual users (as opposed to argumentative HN ideologues).

> Both llama.cpp and ollama are great and focused on different things and yet complement each other

According to the article, ollama is not great (that’s an understatement), focused on making money for the company, stealing clout and nothing else, and hardly complements llama.cpp at all since not long after the initial launch. All of these are backed by evidence.

You may disagree, but then you need to refute OP’s points, not try to handwave them away with a BS analogy that’s nothing like the original.


I guess read the article before commenting?

The author points out that the Ollama people are evil.

So it is more like saying "Stop using SCO Unix, use Linux instead".


Where do they use the term "evil"?

In the gaps between the tops of the lines and the bottoms of the other lines ;)

They might not use the word, but the behavior they describe is evil:

" This isn’t a matter of open-source etiquette, the MIT license has exactly one major requirement: include the copyright notice. Ollama didn’t.

The community noticed. GitHub issue #3185 was opened in early 2024 requesting license compliance. It went over 400 days without a response from maintainers. When issue #3697 was opened in April 2024 specifically requesting llama.cpp acknowledgment, community PR #3700 followed within hours. Ollama’s co-founder Michael Chiang eventually added a single line to the bottom of the README: “llama.cpp project founded by Georgi Gerganov.” "


There isn't much you can do with Ollama models besides saying good morning.

The original implementations of k were all proprietary.

There are a few open source implementations as well by now:

https://wiki.k-language.dev/wiki/Running_K


Which LLM is best at driving DuckDB currently?

DuckDB's SQL dialect closely follows Postgres, and most coding LLMs have been trained on that.

Of the small models I tested, Qwen 3.5 is the clear winner. Going to larger LLMs, Sonnet and Opus lead the charts.


It used to be possible to type immediately while the page is loading and have all key presses end up in the input field.

Why run this check before the user can type?

Why not run it later like before the message gets sent to the server?


I would argue it is an anti-pattern and irritates the core audience they want to reach with Claude Code.


Let's say I have a bunch of objects (e.g. parquet) in R2, can the agent mount them? Or how do I best give the agent access to the objects? HTTP w/ signed urls? Injecting the credentials?


Dynamic Workers don't have a built-in filesystem, but you can give them access to one.

What you would do is give the Worker a TypeScript RPC interface that lets it read the files -- which you implement in your own Worker. To give it fast access, you might consider using a Durable Object. Download the data into the Durable Object's local SQLite database, then create an RPC interface to that, and pass it off to the Dynamic Worker running on the same machine.

See also this experimental package from Sunil that's exploring what the Dynamic Worker equivalent of a shell and a filesystem might be:

https://www.npmjs.com/package/@cloudflare/shell


You don't have to throw a chef's knife away when it becomes dull, you just sharpen it.


At first I was trying to figure out why the parent comment was getting downvoted, then I read the last line. Yeesh, yeah, you don't need to "learn" to sharpen, just get one of those pull-throughs. There is a minuscule learning curve with it. It doesn't do the best sharpening job, but as a particular YouTuber once said: "The best sharpener is the one you will use."


People don't want to do it and they don't want to learn to do it. It's easier for them to buy a new knife. They're not expensive. Maybe keep the old one for garage stuff and gardening.


I have one of these for travel: https://store.177milkstreet.com/products/suehiro-for-milk-st...

All you have to do is run the knife through it a few times for a decent sharpen. No power, no effort, no skill required.


A new knife might not be expensive, but it's a new thing that has to be produced, and packaged, and shipped, and stored, and so on. Just keep your old stuff in shape, people.

