We humans are very good at rejecting any information that doesn’t confirm our priors or support our political goals.
Like, if ChatGPT says vaccines are good (or bad), I expect the other side will simply attack and reject it as misinformation, conspiracy, and the like.
From what I can see, LLMs default to being sycophants; acting as if a sycophant were neutral is entirely compatible with the cognitive bias you describe.
While also noting that "neutral" is not well-defined, I agree: they will be used as if they were neutral regardless.