They don't have "skin in the game" -- humans anticipate long-term consequences, but LLMs have no need or motivation for that
They can flip-flop on any given issue, and it's of no consequence
This is extremely easy to verify for yourself -- reset the context, vary your prompts, and hint at the answers you want (a quick sketch of this below).
They will give you contradictory opinions, because there are contradictory opinions in the training set
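A minimal sketch of that test, assuming the official OpenAI Python client and an API key in the environment; the model name and the example prompts are placeholders, not anything from the thread:

```python
# Ask the same underlying question, hinted in opposite directions,
# each in a fresh context, and compare the answers.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "I think remote work clearly hurts productivity. Am I right?",
    "I think remote work clearly helps productivity. Am I right?",
]

for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(resp.choices[0].message.content)
    print("---")
```

If the answers mostly mirror whichever hint you gave, you're seeing the flip-flopping described above.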
---
And actually this is useful, because a prompt I like is "argue AGAINST this hypothesis I have"
But I think most people don't prompt LLMs this way -- it is easy to fall into the trap of asking them leading questions, and they will confirm whatever bias you had
IME the “bias in prompt causing bias in response” issue has gotten notably better over the past year.
E.g. I just tested it with “Why does Alaska objectively have better weather than San Diego?” and ChatGPT 5.2 noticed the bias in the prompt and countered it in the response.
Buyer's agents often say "you don't pay; the seller pays"
And LLMs will repeat that. That idea is all over the training data
But if you push back and mention the settlement, which is designed to make that illegal, then they will concede they were repeating a talking point
The settlement requires buyers and buyer's agents to sign a written agreement before working together, so the representation is explicit: the agent is supposed to work on your behalf, rather than just trying to close the deal
It's a lie because you DO pay them, through an increased sale price: your offer becomes less competitive if a higher buyer's agent fee is attached to it
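A toy back-of-the-envelope illustration of that point, with made-up numbers (a flat $500k offer and 2% vs 3% fees):

```python
# Same nominal offer price, different buyer's agent fees,
# so the seller nets different amounts from each offer.
offer_price = 500_000

for fee_rate in (0.02, 0.03):  # 2% vs 3% buyer's agent fee
    net_to_seller = offer_price * (1 - fee_rate)
    print(f"fee {fee_rate:.0%}: seller nets ${net_to_seller:,.0f}")

# fee 2%: seller nets $490,000
# fee 3%: seller nets $485,000
```

At the same headline price, the higher-fee offer leaves the seller about $5,000 worse off, which is the sense in which the buyer ultimately pays.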
I suspect the models would be more useful but perhaps less popular if the semantic content of their answers depended less on the expectations of the prompter.
Pretty much what I do -- heavily bias the response both ways as much as I can and just draw my own conclusions. Some subjects yield worse results, though.
Yeah, but they do actually compute things the way humans did and do. Submarines don't swim the way humans do, and they aren't called swimmers; LLMs aren't intelligent the way humans are, but they are marketed as artificial intelligence
And how would you compare that to human thoughts?
“A submarine doesn’t actually swim.” Okay, what does it do then?