https://pmc.ncbi.nlm.nih.gov/articles/PMC11374696/

> language models are more likely to suggest that speakers of [African American English] be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death.

This one is especially insidious to me, because it can happen even when a well-meaning human has already "sanitized" overt references to race/ethnicity: the model is just that good at picking up on (bad but real) signals in the source data.
