
> At best, they simulate what a reasonable answer from a person capable of having an opinion might be

And how would you compare that to human thoughts?

“A submarine doesn’t actually swim.” Okay, what does it do then?



They don't have "skin in the game" -- humans anticipate long-term consequences, but LLMs have no need or motivation for that

They can flip-flop on any given issue, and it's of no consequence

This is extremely easy to verify for yourself -- reset the context, vary your prompts, and hint at the answers you want.

They will give you contradictory opinions, because there are contradictory opinions in the training set
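
A minimal sketch of that test, assuming the OpenAI Python SDK; the model name and the remote-work question are placeholders I made up, not anything authoritative:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt):
        # Each call is a fresh context: no history carries over.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Same question, opposite hints at the desired answer.
    print(ask("Remote work obviously boosts productivity, right? Why?"))
    print(ask("Remote work obviously hurts productivity, right? Why?"))

If both answers confidently agree with the hint, you've reproduced the flip-flop.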

---

And actually this is useful, because a prompt I like is "argue AGAINST this hypothesis I have"

But I think most people don't prompt LLMs this way -- it's easy to fall into the trap of asking leading questions, and the model will confirm whatever bias you had
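
As a rough sketch of that prompt pattern (same hypothetical SDK and model; the hypothesis and system prompt wording are just examples):

    from openai import OpenAI

    client = OpenAI()
    hypothesis = "Microservices are always better than monoliths."

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # Pin the model to the contrarian role up front.
            {"role": "system", "content": "Play devil's advocate. Argue "
             "against the user's hypothesis as strongly as the evidence allows."},
            {"role": "user", "content": "Argue AGAINST this hypothesis I have: "
             + hypothesis},
        ],
    )
    print(resp.choices[0].message.content)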


Can you share an example?

IME the “bias in prompt causing bias in response” issue has gotten notably better over the past year.

E.g. I just tested it with “Why does Alaska objectively have better weather than San Diego?” and ChatGPT 5.2 noticed the bias in the prompt and countered it in the response.


They will push back against obvious stuff like that

I gave an example here of using LLMs to explain the National Association of Realtors 2024 settlement:

https://news.ycombinator.com/item?id=46040967

Buyer's agents often say "you don't pay; the seller pays"

And LLMs will repeat that. That idea is all over the training data

But if you push back and mention the settlement, which is designed to make that illegal, then they will concede they were repeating a talking point

The settlement forces buyers and buyer's agents to sign a written agreement before working together, so the representation is clear: the agent is supposed to work on your behalf, rather than just trying to close the deal

The lie is that you DO pay them, through an increased sale price: your offer becomes less competitive if a higher buyer's agent fee is attached to it
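
A toy comparison with made-up numbers makes the mechanism concrete: sellers compare net proceeds, so a fee attached to your offer weakens it.

    # Entirely hypothetical numbers.
    offer_a = 500_000              # asks the seller to cover a 3% buyer's agent fee
    offer_b = 495_000              # lower headline price, no buyer's agent fee

    net_a = offer_a * (1 - 0.03)   # seller nets 485,000
    net_b = offer_b                # seller nets 495,000

    # Offer B wins despite the lower price; the buyer behind Offer A
    # is effectively paying the fee through a less competitive bid.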


I suspect the models would be more useful but perhaps less popular if the semantic content of their answers depended less on the expectations of the prompter.


> LLMs have no need or motivation for that

Is not the training of an LLM the equivalent of evolution?

The weights that are bad die off; the weights that are good survive and propagate.
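
The analogy is loose -- LLMs are trained by gradient descent, not selection -- but a toy evolutionary loop makes the "die off / propagate" picture concrete (everything here is illustrative):

    import random

    def fitness(w):
        # Hypothetical objective: weight vectors near 1.0 "survive".
        return -sum((x - 1.0) ** 2 for x in w)

    population = [[random.gauss(0, 1) for _ in range(4)] for _ in range(20)]
    for generation in range(50):
        # Selection: the bad half dies off.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # Propagation: the good half reproduces with small mutations.
        children = [[x + random.gauss(0, 0.1) for x in w] for w in survivors]
        population = survivors + children

    print(max(population, key=fitness))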


Pretty much what I do: I heavily try to bias the response both ways, as much as I can, and just draw my own conclusions lol. Some subjects yield worse results, though.


But we didn't name them "artificial swimmers". We called them submarines - because there is a difference between human beings and machines.


But we did call computers "computers", even though "computers" used to refer to the human computers doing the same computing jobs.


Yeah, but they do actually compute things the way humans did and do. Submarines don't swim the way humans do, and they aren't called swimmers; LLMs aren't intelligent the way humans are, but they are marketed as artificial intelligence


Artificial leather isn't leather either. And artificial grass isn't grass. I don't understand this issue people are having with terminology.



