> language models are more likely to suggest that speakers of [African American English] be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death.
This one is especially insidious to me: it can happen even when a well-meaning human has already "sanitized" overt references to race/ethnicity, because the model is just that good at picking up the (bad but real) proxy signals left in the source data.
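A toy sketch of what I mean (synthetic data, hypothetical column names, not any real system): even after the sensitive attribute is dropped from the training table, a correlated proxy feature lets the model reconstruct much of the same signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical sensitive attribute -- never shown to the model.
group = rng.integers(0, 2, size=n)

# A proxy feature that correlates with the group (think zip code, or
# dialect features in text), plus one unrelated feature.
proxy = group + rng.normal(0, 0.5, size=n)
other = rng.normal(0, 1, size=n)

# Historical labels that encode past bias against one group.
label = (0.5 * other - 1.0 * group + rng.normal(0, 1, size=n)) > 0

# "Sanitized" training data: the group column is excluded entirely.
X = np.column_stack([proxy, other])
model = LogisticRegression().fit(X, label)

# The model's scores still differ sharply by group, because the proxy
# carries the signal the sanitization was supposed to remove.
scores = model.predict_proba(X)[:, 1]
print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
```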
https://news.ycombinator.com/item?id=14285116 ("Justice.exe: Bias in Algorithmic Sentencing (justiceexe.com)")
https://news.ycombinator.com/item?id=43649811 ("Louisiana prison board uses algorithms to determine eligibility for parole (propublica.org)")
https://news.ycombinator.com/item?id=11753805 ("Machine Bias (propublica.org)")