Hacker News | Bolwin's comments

I must admit, I didn't expect this to make it to HN.

AI written/edited front page.

A passion project and you can't even trust yourself to write the front page.

Made me lose interest.


Ah, so certain, and so wrong.

(Author here) All 19 edits to the file floppy.md over the past few months are in my local git repo, and all the people who proof-read different versions of it would be happy to attest. The site is statically generated from templates and content created by me.

The manifesto is written in my attempt at the same style as others I've read, like the one for Small Games (as in scope, not file size): https://twitter.com/gingerbeardman/status/196592155732860525...


The small games manifesto has no LLM tells.

If yours really is fully handwritten, then you've been reading so much AI writing you're mimicking a lot of its patterns, which is sad to see.


So you're now not so certain. That's progress, I guess. You seem to want to tell me what I've been doing and how I've been doing it, which is a bit odd.

Maybe you could look at using some AI assistance to provide some real 'useful' feedback to the OP about what made you lose interest, as your false confidence in some opaque 'AI-detection' heuristic is lazy and only harms individuals who are trying to find effective ways to share information.

The AI tells are quite literally what made me lose interest. They help me filter out dozens of AI slop posts daily, so I'll be keeping them, thanks. They're lazy by design: you can generate a good-looking post in 30 seconds, but I can't afford to spend 5 minutes reading each one on its merits.

> summarizing web pages

For summarizing creative writing, I've found Opus and Gemini 3 Pro are still only okay, and actively bad once the text gets over 15K tokens or so.

A lot of long context and attention improvements have been focused on Needle in a Haystack type scenarios, which is the opposite of what summarization needs.
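Chunked (map-reduce) summarization is one common workaround for this long-context degradation: split the text into pieces the model handles well, summarize each piece, then summarize the concatenated summaries. A minimal sketch, where `summarize` is a hypothetical stand-in for any model call:

```python
def chunk_text(text: str, max_chars: int = 8000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks of at most max_chars characters."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Back up a little so sentences cut at a boundary appear in both chunks.
        start = end - overlap
    return chunks

def summarize(text: str) -> str:
    # Placeholder: in practice, call whatever model you use here.
    return text[:100]

def map_reduce_summary(text: str) -> str:
    # Map: summarize each chunk independently. Reduce: summarize the summaries.
    partial = [summarize(c) for c in chunk_text(text)]
    return summarize("\n".join(partial))
```

The character budget and overlap here are arbitrary; in practice you'd size chunks to stay well under the length where quality starts dropping off.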


I think it's less about the extremity of evil and more about lacking the means to get rid of it in a more civil manner.

"when the game is rigged, it's justified to flip the table"

It is, though. Naive not to.

Interesting. Would you be able to release it via an API or as open weights so we can use it outside the context of your application?

@bolwin we will definitely do that in the future. We're still in alpha, making updates and scaling things up, but hopefully in the next 3-6 months we will have an API available.

> The numbers came from the same project and the same prompt across versions.

I'm pretty sure the tester checked. If the request format is the same (which it is, given it uses the same format as Anthropic's stable public API) and the prompt/messages are the same, then bytes will correlate pretty well.


The prompt may be the same, but the project context would surely have changed. The user prompt itself is unlikely to be ~200KB.
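For reference, the byte size of an Anthropic-style request body is dominated by whatever context gets packed into the messages, so identical prompts with identical context produce near-identical sizes. A rough sketch (the model id and field values here are illustrative, not Anthropic's actual defaults):

```python
import json

def request_bytes(prompt: str, context: str = "") -> int:
    """Serialized size of a minimal Anthropic-style messages payload."""
    payload = {
        "model": "claude-example",  # hypothetical model id
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": context + prompt},
        ],
    }
    return len(json.dumps(payload).encode("utf-8"))
```

So a ~200KB request implies roughly 200KB of prompt-plus-context, which is why the user prompt alone is unlikely to account for it.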

Claude -p is allowed. They're not going to give you a feature then ban you for using it.

What they changed is that it now counts as extra usage, which is charged at API rates.


"claude -p" does not charge API rates by itself. I just ran "claude -p 'write hello world to foo.txt'", and it didn't.

What they changed is that if you have OpenClaw run 'claude -p' for you, that gets your account banned or charged API rates, and if they think your usage of 'claude -p' is maybe OpenClaw, even if it's not, you get charged API rates or banned.

It seems so silly to me. They built a feature with one billing rate, and the feature is a bash command. If you have a bad program run the bash command, you get billed at a different rate, if you have a good script you wrote yourself run it, you're fine, but they have literally no legitimate way to tell the difference since either way it's just a command being run.

The justification going around is that OpenClaw usage is so heavy that it impacts the service for other people, but like OpenClaw was just using the "claude code max" plan, so if they can't handle the usage the plan promises, they should be changing the plan.

If they had instead said "Your claude code max plan, which has XX quota, will get charged API rates if you consistently use 50% of your quota. The quota is actually a lie, it's just the amount you can burst up to once or twice a week, but definitely not every day" and just banned everyone that used claude code a lot, I wouldn't be complaining as much, that'd be much more consistent.


It only switches to charging API rates if some part of your prompt triggers their magic string detector. Lot of examples of that floating around where swapping "is" for "are" or whatever will magically allow the request against your subscription plan again.

I find it bizarre that Anthropic's public image is still seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms about their tech being used for war and slaughter, save for two very thin lines: mass surveillance of American citizens and fully automated weaponry with their current models.

It only showed they were marginally more ethical than OpenAI and XAI which isn't saying much.


Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.

Those are?

The idea that it's not okay to arm the military is a position of privilege. The ethical issues are around how the military chooses to use its abilities, not around giving them the tools to do their jobs. We're talking about folks who are willing to give their lives up for others. If you're not going to serve yourself you should at least be willing to help them live. This has nothing to do with whether or not you support the political uses of the military. If world war 3 breaks out and you are forced to serve, you may find yourself feeling differently.

Yes and... that's a position of privilege that anyone in the position should ethically take.

It's unfair to sweep provision of methods to the military under a "respect the service" catch-all justification.

Two things can simultaneously be true: (1) individuals serving in the military are making sacrifices (in terms of pay, family life, personal safety) that deserve respect and (2) the military as a political institution will amorally deploy whatever capabilities it has access to, to achieve political aims.

There's a reason the US stopped research on and production of offensive chemical weapons, biological weapons, and tactical nuclear devices -- effective capabilities will be used if they exist.


With respect to the weapons programs, I'm not a historian, but I was not under the impression that the US stopped development of these weapons unilaterally or out of good will. My understanding is that it was due to a mixture of not perceiving a need or use for the capabilities, along with formal or informal international cooperation eliminating the need for deterrence.

Just a couple of thoughts since it seems like the next issues in this space are rapidly arriving or already here.


As far as I've read the literature from the 60s and 70s, tactical nukes were eventually eliminated in order to assuage western Europe's concerns that large portions of their countries would be turned into irradiated wastelands for decades / centuries if war erupted between the US and USSR.

It was also the product of perceived overmatch on both sides -- the Soviets believed they had superior mass of armored formations (and they did), while the US and allies believed they had technological supremacy (and they did). Ergo, neither needed tactical nukes.

It didn't hurt that it helped both in the eyes of the then vehemently anti-nuclear European movements.

Offensive bio and chemical weapon limitation is a more nuanced decision.

In both cases, their primary use was either local mass lethality or terrain denial, neither of which was important in the then-gelling American doctrines of maneuver.

The sole use case they seemed viable for was industry denial (e.g. contaminate a high capital cost industrial center), a task at which strategic sized nuclear weapons were equally adept (and more easily stored). So, if you had to have strategic nuclear weapons for deterrence, and they were capable of the same task, why have fiddly bio and chemical weapons?

But in both cases there was also a constant radiant pressure of scientists and the public campaigning against them, and being unwilling to work on or tolerate them.

Absent that, who knows how history would have turned out? Normalization is a powerful opinion shifter.


I'd feel much better about supporting the military actions of the people becoming part of that system if they exercised some fucking free will and didn't follow criminals in our government into wars that serve neither our people nor our country. We have a serious problem in our government, and its connection in any way with what is happening in that institution gives me great pause in believing in the people of this country. People are stupid not to fight this government tooth and nail.

Try two newlines between each one


That, or add 4 spaces before each line (renders as a <pre>).


Two spaces: https://news.ycombinator.com/formatdoc

It's for code though, not lists or bullet points.


Many ASR models already support prompts for adding your own terminology. This one doesn't, but full LLMs, especially such expensive ones, aren't needed for that.


A lot of them, like Whisper, have severely limited context for adding your own terminology.
