It doesn't look like self-censoring at all. Basically, you want the default behavior of LLMs to be gambling on someone's ethnicity based on how they look.
Grok used a book as a reference.
Ethnicity isn't a fact you can reliably infer just by looking at someone.
Now ask DeepSeek about what happened in Tiananmen Square and watch what censorship actually looks like.
It literally knows the facts, but there's a layer on top that prevents it from stating them.
That's censorship.
It's not an opinion, and it's not a judgment call along some gradient; it's just a known historical fact.