Hacker News | Terr_'s comments

We also need to talk more about the third not-mutually-exclusive option: Legal liability if/when things go wrong.

It's often buried because the people making money dislike it, so much so that they will lobby the government to impose wide bans instead. Especially if:

* The ban makes somebody else pay most of the costs of protecting "the children" against their design-choices or business-model.

* The ban gives them a blanket pass for almost any exploitative design against adults or other acceptable targets.


They linked to the manual page; if you go to the root of the site, there's a link to a different domain, which is the "Official authorised online store for all Cafelat items."

Even then, we're stuck with the root problem of LLM-based agents (i.e. the ones everyone is trying to use these days) being fundamentally untrustworthy and prone to going rogue.

> Precrime only works if they can separate signal from noise.

IMO a lot of these debates depend on implicit assumptions about the threat and how it operates. For example:

1. Lawful Evil: They care about good data and going after the "worst" offenders, even if I might disagree about what is "bad".

2. Lazy risk-averse evil: The data needs to give them something to justify the existence of the program, so they'll go after whoever is convenient.

3. Cover-your-ass evil: The data archive exists to let them make a plausible case for someone they've already decided to persecute for other reasons.

4. Fraudulent evil: The data archive is just to make it easier to fabricate a fake reason to go after someone.

5. Blatant evil: The data doesn't matter because they can just do stuff to you by fiat.

Some of those groups would be hampered by noise, some would benefit from noise, and the last just won't care.


That reminds me of a sci-fi quote, where one of the main characters is discussing a murderous antagonist, putting their evil into a broader context:

> "He was just a little villain. An old-fashioned craftsman, making crimes one-off. The really unforgivable acts are committed by calm men in beautiful green silk rooms, who deal death wholesale, by the shipload, without lust, or anger, or desire, or any redeeming emotion to excuse them but cold fear of some pretended future. But the crimes they hope to prevent in that future are imaginary. The ones they commit in the present--they are real."

-- Shards of Honor (1986) by Lois McMaster Bujold


To me, the distinction between what we wanted and what we got is something like:

1. A kind of capital that is widely available, so that people could exercise control and agency with machines that do what they want for their own needs.

2. A distribution tool controlled by mega-corporations as they decide what you should be able to see or have.


Related concept: Unaccountability machines [0] where the system (electronic or organizational) mainly exists to make things nobody's fault.

There's a Discworld bit [1] that often comes to mind for me, where the protagonist is reading a press release by a corporate communications monopoly:

> The Grand Trunk’s problems were clearly the result of some mysterious spasm in the universe and had nothing to do with greed, arrogance, and willful stupidity. Oh, the Grand Trunk management had made mistakes—oops, “well-intentioned judgments which, with the benefit of hindsight, might regrettably have been, in some respects, in error”—but these had mostly occurred, it appeared, while correcting “fundamental systemic errors” committed by the previous management. No one was sorry for anything, because no living creature had done anything wrong; bad things had happened by spontaneous generation in some weird, chilly, geometric otherworld, and “were to be regretted.”

[0] https://press.uchicago.edu/ucp/books/book/chicago/U/bo252799...

[1] Going Postal by Terry Pratchett



> I apologized to it and told it to think like an LLM instead of the person I was treating it as.

It sounds like you didn't actually stop treating it like a person. Pareidolia is a helluva instinct.


It's a language model. Language is what it models. So you use language to move it into an advantageous state space. Dunno what you want from me, lol.

It's kind of a "code that gets the immediate result you want" versus "code that puts the developer in the right headspace for maintaining it" thing.

Ultimately you're not conversing with any real LLM; it's iterative document generation where humans perceive fictional characters in the output. If the text you contribute says "You're just an {Noun}", that's shaping the document output based on what got trained in relation to {Noun}.

Which may eventually backfire, when the (real) LLM gets trained on documents such as blog-posts like "The moment you realize the {Noun} is retarded."
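The "iterative document generation" framing above can be sketched as a loop that just keeps appending to one text buffer. This is a toy stand-in, not any real model API; the `fake_completion` function and the speaker labels are purely illustrative:

```python
def fake_completion(document: str) -> str:
    """Toy stand-in for a real LLM: returns the next chunk of the document.

    A real model would sample a continuation conditioned on everything
    in `document`; here we hard-code one to show the mechanism.
    """
    return "Assistant: Well, speaking as a {Noun} would...\n"

# The "conversation" is really one growing document. The model only
# ever sees and extends text; the separate "speakers" are characters
# that humans perceive in the output, shaped by what we wrote earlier.
transcript = "User: You're just a {Noun}.\n"
transcript += fake_completion(transcript)
print(transcript)
```

The point of the sketch is that calling the character a {Noun} becomes part of the document being continued, which is why insults or flattery in the prompt steer what comes next.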


Vibrio is also a prime suspect in unusual die-offs of sea-stars.

https://www.scientificamerican.com/article/vibrio-pectenicid...

