
As well as that, I suspect the major AI companies are fearful of generating images of real people - presumably not wanting to be involved with people generating fake images of "Donald Trump rescuing wildfire victims" or "Donald Trump fighting cops".

Their efforts to add diversity would have been a lot more subtle if, when you asked for images of "British Politician" the images were recognisably Rishi Sunak, Liz Truss, Kwasi Kwarteng, Boris Johnson, Theresa May, and Tony Blair.

That would provide diversity while also being firmly grounded in reality.

The current attempts at being diverse while simultaneously trying not to resemble any real person seem to produce some wild results.



My takeaway from all of this is that alignment tech is currently quite primitive and relies on very heavy-handed band-aids.


I think that's a bit overly charitable.

Would it not be reasonable to also draw the conclusion that the notion of alignment itself is flawed?


We're honestly just seeing generative algorithms fail at diversity initiatives as badly as humans do.

Forcing diversity into a system is an extremely tough, if not impossible, challenge. Initiatives have to be driven by goals and metrics, meaning we have to boil diversity down to a specific list of quantitative measures. Things will always be missed when our best tool for tackling a moral or noble goal is reducing a complex spectrum of qualitative data to a subset of measurable numbers.



