Hacker News | past | comments | ask | show | jobs | submit | gcr's comments

I liked it! Qntm's prose is what hooked me.

If you'd like a representative sample, check out the previous version which remains available on the SCP wiki: https://scp-wiki.wikidot.com/antimemetics-division-hub

As part of qntm's book deal, the prose in the book was gently revised to change names etc. Some chapters were reordered for improved flow.


The SQLite documentation says in strong terms not to do this. https://sqlite.org/howtocorrupt.html#_filesystems_with_broke...

See more: https://sqlite.org/wal.html#concurrency


They tell you to use a proper FS, which is largely orthogonal to containerization.

WAL relies on shared memory, so while a proper FS is necessary, it isn't going to help in this case.

Why does it not help if both containers can mmap the same -shm file?

Shared memory across containers is a property of a containerization environment, not a property of a file system, "proper" or not.

It's a property of the filesystem; Docker does not virtualize the filesystem.

btw, the NFS mentioned here is fine in sync mode. However, that is slow.
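To make the -shm discussion above concrete, here is a minimal Python sketch of what enabling WAL mode does on disk (the path is illustrative, standing in for a volume shared between two containers; this is not taken from the thread):

```python
import os
import sqlite3
import tempfile

# Stand-in for a database file on a volume shared between containers.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")

conn = sqlite3.connect(db_path)

# Switching to WAL creates app.db-wal and app.db-shm alongside the
# database. The -shm file is an mmap'd shared-memory index into the WAL,
# so every process touching the database must be able to map the same
# physical pages. This is why SQLite warns against network filesystems
# and why "can both containers mmap the same -shm file?" is the real test.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # prints "wal" on filesystems whose locking and mmap work
conn.close()
```

If the pragma returns "delete" instead of "wal", SQLite has refused the switch, which typically means the underlying filesystem can't support it.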

does anyone have pointers to similar articles that talk about GPU history?

One example is "No graphics API" by Sebastian Aaltonen, shared here 3 months ago, which is a tour de force of graphics-stack innovations told by contrasting the history of OpenGL/Vulkan with WebGPU/Metal development: https://news.ycombinator.com/item?id=46293062 Because it requires an in-depth understanding of the shader pipeline, the article touches on the significant graphics cards of each era. I'd love to see more like it!


If you like brutalism, you might also enjoy the Quake Brutalist Map Jam 3, which released last month: https://www.slipseer.com/index.php?resources/quake-brutalist...

My favorite map is ‘One Need Not Be a House’ by Robert Yang, which was inspired by Louis Kahn's "brick brutalism" masterpieces in Bangladesh and India, as well as contemporary level design like The Silent Cartographer. The artist writes about their process on their blog post, https://www.blog.radiator.debacle.us/2026/01/one-need-not-be...

The map jam is standalone and uses custom assets so you don’t need a copy of Quake to enjoy it. Check the website for the ‘standalone’ variant.

Sorry for derailing! Cool laptop stand!


Neat! I was big into Quake years ago. This looks like something I could waste a weekend on.

Are these all single-player maps? Are there any that are designed for (or would at least be suitable for) 1-4 player deathmatch?


Just finished reading Masters of Doom; it's crazy that Quake is still a thing today.

I do really like the fast pace of Doom Eternal and Dark Ages, which I think you can see here.


Was just gonna say this is a great accessory to put your computer on while playing QBJ3!

Yang also regularly writes really interesting blog posts, mostly around game design. Very much recommend keeping tabs on him.

agreed! i was reading his posts this morning on the subway and he's now a part of my RSS reader :-)

See also: “I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me” by Marcus Olang', https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...

For what it’s worth, Pangram reports that Marcus’ article is 100% LLM-written: https://www.pangram.com/history/640288b9-e16b-4f76-a730-8000...


In theory, it wouldn't be too hard to settle the question of whether he used ChatGPT to write it: get Olang to write a few paragraphs by hand, then have people blindly judge whether it's the same style as the article, and which one sounds more like ChatGPT.

The times I've written articles, they've gone through multiple rounds of review (by humans) with countless edits each time before being published, and I wonder whether I'd pass that test in those cases. My initial drafts, with their scattered thoughts, are usually very different from the published end result, even without multiple reviewers and editors involved.

When people judge blindly, they are more likely to think the human is the AI and the AI is the human.

73% judged GPT-4.5 (edit: I had incorrectly said 4o before) to be the human.

https://arxiv.org/abs/2503.23674

Not only are people bad at judging this, they are directionally wrong.


There is research showing the contrary that is far more convincing:

> Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization.

https://arxiv.org/html/2501.15654v2


Great find, I've submitted this preprint as a standalone item: https://news.ycombinator.com/item?id=47678270

For what it’s worth, Pangram thinks this article is fully human-written: https://www.pangram.com/history/f5f68ce9-70ac-4c2b-b0c3-0ca8...

The AI writing detectors are very unreliable. This is important to mention because they can trigger in the opposite direction (reporting human written text as AI generated) which can result in false accusations.

It’s becoming a problem in schools as teachers start accusing students of cheating based on these detectors or ignore obvious signs of AI use because the detectors don’t trigger on it.


Then pangram isn't very good, because that article is full of Claude-isms.

> because that article is full of Claude-isms

Not sure how I feel yet about the whole "LLMs learned from human texts, so now the people who helped write those human texts are accused of plagiarizing LLMs" thing, but so far it seems backwards, and like a low-quality criticism.


Real talk. You're not just making a good point -- you're questioning the dominant paradigm

Horrible

I'm sure some human writers would write:

> The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND would see correct, complete cleanup for every resource BADEND was designed to handle.

> The specification approaches from the other direction: starting from LGYRO and asking whether any paths fail to clear it.

> *Tests verify the code as written; a behavioural specification asks what the code is for.*

However this is a blog post about using Claude for XYZ, from an AI company whose tagline is

"AI-assisted engineering that unlocks your organization's potential"

Do you really think they spent the time required to actually write a good article by hand? My guess is that they are unlocking their own organization's potential by having Claude write the posts.


> Do you really think they spent the time required to actually write a good article by hand?

Given that I've been familiar with Juxt since before LLMs were a thing, have used plenty of their Clojure libraries, and have hung out with people from Juxt, yes, I do think they could have spent the time required to both research and write articles like these. Again, I won't claim to know for sure how they wrote this specific article, but I'm familiar enough with Juxt to feel relatively confident they could write it.

Juxt is more of a consultancy shop than an "AI company"; not sure where you got that from. I guess their landing page isn't 100% clear about what they actually do, but they're prominent in the Clojure ecosystem and have been for a decade if not more.


Your guess is worth what you paid for it.

Is it possible for a tool to know if something is AI written with high confidence at all? LLMs can be tuned/instructed to write in an infinite number of styles.

Don't understand how these tools exist.


The WikiEDU project has some thoughts on this. They found Pangram good enough to detect LLM usage while teaching editors to make their first Wikipedia edits, at least enough to intervene and nudge the student. They didn't use it punitively or expect authoritative results, however. https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipe...

They found that Pangram suffers from false positives in non-prose contexts like bibliographies, outlines, formatting, etc. The article does not touch on Pangram’s false negatives.

I personally think it's an intractable problem, but I do feel Pangram gives some useful signal, albeit not reliably.


It has Claude-isms, but it doesn't feel very Claude-written to me, at least not entirely.

What's making it even more difficult to tell now is that people who use AI a lot seem to be actively picking up some of its vocabulary and writing-style quirks.


Pangram has a very low false positive rate, but not the best false negative rate: https://www.pangram.com/blog/third-party-pangram-evals

You sound like a flat earther and a moon landing denier combined.

Pangram doesn't reliably detect individual LLM-generated phrases or paragraphs among human written text.

It seems to look at sections of ~300 words. And for one section at least it has low confidence.

I tested it by getting ChatGPT to add a paragraph to one of my sister comments. Result is "100% human" when in fact it's only 75% human.

Pangram test result: https://www.pangram.com/history/1ee3ce96-6ae5-4de7-9d91-5846...

ChatGPT session where it added a paragraph that Pangram misses: https://chatgpt.com/share/69d4faff-1e18-8329-84fa-6c86fc8258...
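The dilution effect described above can be sketched in a few lines of Python. This is a toy illustration, not Pangram's actual algorithm (the ~300-word window is the commenter's observation, not documented behavior): one short inserted paragraph is only a fraction of whichever window it lands in.

```python
# Toy model: a detector that scores fixed-size word windows. One short
# AI-written paragraph dropped into mostly-human text is a minority of
# the single window it falls into, so a per-window score can miss it.
def chunks(words, size=300):
    return [words[i:i + size] for i in range(0, len(words), size)]

human = ["human"] * 900   # stand-in for ~900 words of human prose
ai = ["ai"] * 75          # stand-in for one inserted AI paragraph
doc = human[:450] + ai + human[450:]

fracs = [c.count("ai") / len(c) for c in chunks(doc)]
print([round(f, 2) for f in fracs])  # prints [0.0, 0.25, 0.0, 0.0]
```

Even in the window containing the inserted paragraph, only a quarter of the words are AI-written, which is consistent with a per-section detector reporting the whole document as human.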


This is useful, thanks! TIL

So you're saying Pangram isn't worth much?

For the record, Pangram reports that 100% of this post is likely AI generated: https://www.pangram.com/history/0c785fe7-13b0-4f00-8cd2-b359...

The author posted about “AI slop is eating the world” a couple months ago: https://news.ycombinator.com/item?id=42167020

I wonder what changed their mind.


Author here. I did not use AI to write this essay. I write in apple notes and then move to an old app I use called Hemingway that I've used for years.

I've gotten this a lot on the 3 essays I have up, so I avoided Hemingway entirely on the latest one I posted today https://www.terrygodier.com/body-language - I left it significantly wordier and less edited. I hope someday these sorts of comments will ease up a little; it's quite disheartening, even if I understand the suspicion and where it's coming from. There is something interesting there, in the way that AI has caused me to alter how I write to avoid being labeled as AI.

Also, here is a literal blog post that I also wrote without AI about someone using AI to copy my app, which has my entire AI philosophy in it: https://blog.terrygodier.com/2026/03/22/on-ai-and-prior-art....


> Author here. I did not use AI to write this essay.

Maybe you did. Maybe you didn't. It's your word vs. theirs.

But one thing that is undeniable is that your article reads very much like AI-generated text. While reading it, I couldn't help thinking how ironic it is to write about the virtues of simpler devices using something that is obviously an AI-generated article.

The Pangram report doesn't help your case either: https://www.pangram.com/history/f733dac6-a23f-480e-b18a-6794... (100% AI Generated)


I am shocked it resonated with readers here so universally. It was well-presented visually, but genuinely miserable to read with all the worst tells of AI writing. It contained two or three actual sentences of content with intense repetition, obnoxious signposting, and disjoint "what the fatcats don't want you to know" framing throughout. "Nobody in a position of power is saying this. The reason is simple: They sold you the condition. Now they sell you the treatment." The single worst thing I've read on Hacker News this year.

“Low hum” was the clincher for me. Chatbots just love to talk about things humming.

Aren't AI detectors almost exclusively terrible at their job, though? I wouldn't put a lot of weight on that.

Is it possible that Pangram makes mistakes?

Oh certainly, but this particular article also reads very strongly like LLM output to me.

Counterpoint: everything creates chores. That’s the nature of things. Having to re-pair headphones is like having to clean dishes. Clearing out inboxes is like dusting.

You don’t have to do them, but the author correctly points out that cruft accumulates when you don’t.

Maybe some chores are hardware chores (showering, dishes, laundry) and some are software chores (updates, pairing, EULA screens), and software chores are BAD while hardware chores are GOOD. I know I broadly prefer software chores because they’re easier to resolve, even though they don’t let me work with my hands.

Yes, software chores have increased in the past century. Yes, software chores are human-created or company-created. You know what else has increased? The role of software in our lives, and the number of retail companies the average human interacts with!


"Betrayal" also isn't the word that GT used.

It seems like an accurate paraphrase in the context of other known news about Delve, and it was shorter than "We trusted them to not do something intentionally fraudulent and then lie to us, and I believe they knowingly violated that trust by doing something intentionally fraudulent and lying to us."

What if you’re a cofounder or founding engineer and the company hasn’t raised yet?

Unlike in the article where a contractor was promised payment and no payment was made, the cofounder here knows already that the company can’t pay until they raise funds, and has planned for this accordingly, by living off of personal savings or contract jobs. They also understand the risk they’ve taken on and are comfortable trading their time for possibly zero returns.

Thank you. (FWIW it was an earnest question)

You’re welcome!

Cofounders and founding engineers just don’t talk enough about how they model the financial risk of investing in a high risk asset like a startup, and how they make sure that they’re not taking on more risk than they can personally manage if things don’t work out. Approaches will vary by person and personal circumstances, so it’s also helpful to know what constraints they’re working with (like needing to house and feed their family).

