
It'll also add extra fingers to human hands. Presumably that's not because of DEI guardrails about polydactyly, right?

The current state of the art in AI gets things wrong regularly.



Sure, but this one is from Google adding a tag to make every image of people diverse, not AI randomness.


Am I missing something in the link demonstrating that, or is it conjecture?


If you look closely at the response text that accompanies many of these images, you'll find recurring wording like "Here's a diverse image of ... showcasing a variety of ethnicities and genders". The fact that it uses the same wording strongly implies that this is coming from the prompt used for generation. My bet is that they have a simple classifier trained to detect whether a prompt requests a depiction of a human, and that they append "diverse image showcasing a variety of ethnicities and genders" to the user's prompt if so. This would explain all the images seen so far, as well as the fact that other models don't show this kind of bias.
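The hypothesized classify-then-append step could be sketched in a few lines. To be clear, everything here is invented for illustration (function names, the keyword list, the appended suffix's placement); a real system, if one exists, would presumably use a trained classifier rather than keyword matching:

```python
# Speculative sketch of the prompt-augmentation hypothesis described above.
# None of these names come from Google; this is guesswork about the mechanism.

def depicts_people(prompt: str) -> bool:
    # Stand-in for a trained classifier; keyword matching is only a toy proxy.
    keywords = ("person", "people", "man", "woman", "soldier", "king", "family")
    return any(word in prompt.lower() for word in keywords)

def augment_prompt(user_prompt: str) -> str:
    # Append the recurring diversity phrasing only when people are requested.
    if depicts_people(user_prompt):
        return user_prompt + ", diverse image showcasing a variety of ethnicities and genders"
    return user_prompt

print(augment_prompt("a medieval king"))
# -> a medieval king, diverse image showcasing a variety of ethnicities and genders
```

This would also account for why prompts with no humans in them ("a mountain landscape") show no such skew.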


Have you bothered to look at all? Read the output of the model when asked why it behaves the way it does. Look at the plethora of images it generates that are not just historically inaccurate but absurdly so. It tells you "here's a diverse X" when you ask for X. Yet asking for pictures of Koreans generates only Asian people, while prompts for Scots or French people in historical periods generate mostly non-white people. You're being purposefully obtuse. Google has had racism complaints about previous models and talks often about AI safety and avoiding 'bias'. Are you really arguing that it's more likely the training data developed an inherent bias against generating white people purely by chance?


It's been demonstrated on Twitter a few times, can't find a link handy


OpenAI has no problem showing accurate pictures. You know it's Google-induced bias, but feign ignorance.

If you ask for a picture of nazi soldiers it shouldn't have 60% Asian people like you say. You know you're wrong but instead of admitting it, you're moving the goalpost to "hands".

This entire thread is you being insincere.


https://twitter.com/altryne/status/1760358916624719938

Here's some corporate-lawyer-speak straight from Google:

> We are aware that Gemini is offering inaccuracies...

> As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously.


That doesn't back up the assertion; it's easily read as "we make sure our training sets reflect the 85% of the world that doesn't live in Europe and North America". Again, 1/4 white people is statistically what you'd expect.


Fuck, this is going to sound fucked up... but even if a random person drawn from the globe has a 1/4 chance of being white, populations generally clump together. For example, you generally find a shitload of Asian people in Asia, white people in Europe, African people in Africa, and Indian people in India.

Probably the only places where you wouldn't expect this are heavily colonized ones like South Africa, Australia, and the Americas.


Sure, but I see three 200 responses and a 400 - not 1/4 white people as statistically expected.


This specific thing is a much more blatant class of error, one that has been known to occur in several previous models because of DEI systems (e.g. in cases where prompts have been leaked), and has never been known to occur for any other reason. Yes, it's conceivable that Google's newer, better-than-ever-before AI system somehow has a fundamental technical problem that coincidentally just happens to cause the same kind of bad output as previous hamfisted DEI systems, but come on, you don't really believe that. (Or if you do, how much do you want to bet? I would absolutely stake a significant proportion of my net worth - say, $20k - on this.)


> has never been known to occur for any other reason

Of course it has. Again, these things regularly give humans extra fingers and arms. They don't even know what humans fundamentally look like.

On the flip side, humans are shitty at recognizing bias. This comment thread stems from someone complaining the AI only rarely generated white people, but that's statistically accurate. It feels biased to someone in a majority-white nation with majority-white friends and coworkers, but it fundamentally isn't.

I don't doubt that there are some attempts to get LLMs to go outside the "white westerner" bubble in training sets and prompts. I suspect the extent of it is also deeply exaggerated by those who like to throw around woke-this and woke-that as derogatories.


A very impressive display of crimestop you've got going in this thread. How did you end up like this?


> Of course it has. Again, these things regularly give humans extra fingers and arms. They don't even know what humans fundamentally look like.

> This comment thread stems from someone complaining the AI only rarely generated white people, but that's statistically accurate. It feels biased to someone in a majority-white nation with majority-white friends and coworkers, but it fundamentally isn't.

So the AI is simultaneously too dumb to figure out what humans look like, but also so super smart that it uses precisely accurate racial proportions when generating people (not because it's been specifically adjusted to, but naturally)? Bullshit.

> I don't doubt that there are some attempts to get LLMs to go outside the "white westerner" bubble in training sets and prompts. I suspect the extent of it is also deeply exaggerated by those who like to throw around woke-this and woke-that as derogatories.

You're dodging the question. Do you actually believe the reason that the last example in the article looks very much not like a man is a deep technical issue, or a DEI initiative? If the former, how much are you willing to bet? If the latter, why are you throwing out these insincere arguments?


Congratulations, here is your gold medal in mental gymnastics. Enough now.

It literally refuses to generate images of white people when prompted directly, while for every other group it not only happily obliges but produces that specific group in all 4 results. That's discriminatory, and based on your inability to see it, you may be too.


The AI will literally scold you for asking it to make white characters, and insists that you need to be inclusive and that it is being intentionally dishonest to force the issue.



