Basically any frontier model right now and ask it any politically divisive fact that may upset certain classes of people.


For example?

Because with Deepseek it's pretty straightforward censorship.



It doesn't look like self-censoring at all: basically you want the default behavior of LLMs to be gambling on someone's ethnicity based on how they look.

Grok used a book as a reference.

Ethnicity isn't a fact you can infer just from looking at someone.

Now ask Deepseek about what happened in Tiananmen Square and watch what censorship actually looks like.

It literally knows the facts, but there's a layer that prevents it from stating them.

That's censorship.

It's not an opinion, it's not a judgment call on a gradient; it's just a known historical fact.



