
Sure, but this one is from Google adding a tag to make every image of people diverse, not AI randomness.


Am I missing something in the link demonstrating that, or is it conjecture?


If you look closely at the response text that accompanies many of these images, you'll find recurring wording like "Here's a diverse image of ... showcasing a variety of ethnicities and genders". The fact that it uses the same wording strongly implies that this is coming out of the prompt used for generation. My bet is that they have a simple classifier trained to detect whether a prompt requests a depiction of a human, and, if so, it appends something like "diverse image showcasing a variety of ethnicities and genders" to the prompt the user provided. This would explain all the images seen so far, as well as the fact that other models don't have this kind of bias.
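The parent's hypothesized pipeline could be sketched as follows. To be clear, this is purely speculative: the keyword heuristic and the injected suffix below are assumptions for illustration, not anything known about Google's actual implementation (which, if it exists, likely uses a learned classifier rather than a regex).

```python
import re

# Hypothetical sketch of the prompt-rewriting step described above.
# The keyword list and the appended suffix are illustrative assumptions.
PEOPLE_TERMS = re.compile(
    r"\b(person|people|man|woman|men|women|soldier|soldiers|"
    r"king|queen|family|crowd|portrait)\b",
    re.IGNORECASE,
)

DIVERSITY_SUFFIX = ", showcasing a variety of ethnicities and genders"

def rewrite_prompt(user_prompt: str) -> str:
    """If the prompt appears to request a depiction of humans,
    append a diversity instruction; otherwise pass it through."""
    if PEOPLE_TERMS.search(user_prompt):
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

print(rewrite_prompt("a portrait of a medieval king"))
print(rewrite_prompt("a photo of a mountain lake"))
```

A scheme like this would also be consistent with the observation that the model's accompanying text echoes the injected wording back to the user.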


Have you bothered to look at all? Read the output of the model when asked why it has the behaviour it does. Look at the plethora of images it generates that are not just historically inaccurate but absurdly so. It tells you "here's a diverse X" when you ask for X. Yet asking for pictures of Koreans generates only Asian people, while prompts for Scots or French people in historical periods generate mostly non-white people. You're being purposefully obtuse. Google has had racism complaints about previous models and talks often about AI safety and avoiding 'bias'. Are you really arguing that it's more likely the training data happened, purely by chance, to have an inherent bias against generating white people?


It's been demonstrated on Twitter a few times; I can't find a link offhand.


OpenAI has no problem showing accurate pictures. You know it's Google-induced bias, but feign ignorance.

If you ask for a picture of nazi soldiers it shouldn't have 60% Asian people like you say. You know you're wrong but instead of admitting it, you're moving the goalpost to "hands".

This entire thread is you being insincere.


https://twitter.com/altryne/status/1760358916624719938

Here's some corporate-lawyer-speak straight from Google:

> We are aware that Gemini is offering inaccuracies...

> As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously.


That doesn't back up the assertion; it's easily read as "we make sure our training sets reflect the 85% of the world that doesn't live in Europe and North America". Again, 1/4 white people is statistically what you'd expect.


Fuck, this is going to sound fucked up... but even though you have a 1/4 chance of picking a random white person from the globe, people generally tend to clump together. For example, you generally find a shitload of Asian people in Asia, white people in Europe, African people in Africa, and Indian people in India.

Probably the only places where you wouldn't expect this are heavily colonized regions like South Africa, Australia, and the Americas.


Sure, but I see three 200 responses and a 400 - not 1/4 white people as statistically expected.



