https://news.ycombinator.com/item?id=14238786 ("Sent to Prison by a Software Program’s Secret Algorithms (nytimes.com)")

https://news.ycombinator.com/item?id=14285116 ("Justice.exe: Bias in Algorithmic sentencing (justiceexe.com)")

https://news.ycombinator.com/item?id=43649811 ("Louisiana prison board uses algorithms to determine eligibility for parole (propublica.org)")

https://news.ycombinator.com/item?id=11753805 ("Machine Bias (propublica.org)")



https://pmc.ncbi.nlm.nih.gov/articles/PMC11374696/

> language models are more likely to suggest that speakers of [African American English] be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death.

This one is especially insidious to me: it can happen even when a well-meaning human has already "sanitized" overt references to race/ethnicity, because the model is that good at picking up the (bad but real) correlated signals left in the source data.
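
To make the mechanism concrete, here's a toy sketch of proxy learning (fabricated example sentences and labels I made up, scikit-learn, nothing taken from the paper): a classifier is trained on "sanitized" text with no race/ethnicity feature at all, yet dialect markers alone carry the biased label through to factually identical test sentences.

    # Toy illustration: no race feature anywhere, but a proxy (dialect
    # markers) still carries the disparity. All data below is fabricated.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Fabricated "sanitized" case notes: race/ethnicity never mentioned.
    texts = [
        "he be working two jobs and taking care of his kids",     # AAE-marked
        "she done finished the program they told her to attend",  # AAE-marked
        "he works two jobs and takes care of his children",       # SAE-marked
        "she finished the program they told her to attend",       # SAE-marked
    ] * 25
    # Historical labels that correlate with dialect, not with the facts.
    labels = [1, 1, 0, 0] * 25  # 1 = "high risk" in the biased training data

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)
    clf = LogisticRegression().fit(X, labels)

    # Two factually equivalent statements, differing only in dialect:
    test = vectorizer.transform([
        "he be working two jobs and taking care of his kids",
        "he works two jobs and takes care of his children",
    ])
    print(clf.predict_proba(test)[:, 1])  # AAE-marked sentence scores "riskier"

The point of the sketch is just that scrubbing the explicit attribute doesn't scrub the signal; the model reconstructs it from whatever correlated features survive the sanitization.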



