Hacker News

> I can't help but wonder who's pushing this "AI safety" narrative, and why.

After watching Facebook evolve over the last ~20 years, I think a bit of skepticism is warranted.



IMO, Facebook's problems (body image issues, fake news/clickbait) aren't caused by a genuine inability to control their "AI" algorithms[1]. The root cause is that they prioritize profits over user well-being. Framing the issue as "AI safety" feels like blaming a slot machine's bytecode instead of the casino owners.

But I appreciate the reply. I'm heartened to learn that people (besides the NYT/WSJ) are genuinely concerned.

[1] I'd expect the algorithms are pretty similar to 'show me things like the ones my friends interacted with.' And if they really cared about addiction, they could just disable infinite scroll.
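The footnote's guess about the algorithm can be sketched as a simple popularity-weighted ranking over friends' interactions. Everything below is hypothetical, made up purely to illustrate the "show me things like the ones my friends interacted with" idea; it is not Facebook's actual algorithm, and all names are invented.

```python
# Hypothetical sketch of the footnote's guess: score each candidate
# post by how often the viewer's friends engaged with its topic.
# Invented names throughout; not any real platform's algorithm.

from collections import Counter

def rank_feed(candidates, friend_interactions):
    """Rank candidate posts by topic overlap with friends' interactions.

    candidates: list of (post_id, topic) tuples.
    friend_interactions: list of topics friends recently engaged with.
    """
    topic_weight = Counter(friend_interactions)
    # Higher score = more friends recently engaged with this topic.
    return sorted(candidates, key=lambda p: topic_weight[p[1]], reverse=True)

feed = rank_feed(
    [("p1", "cats"), ("p2", "politics"), ("p3", "cooking")],
    ["politics", "politics", "cats"],
)
# "politics" posts float to the top because friends engaged with them most.
```

The point of the sketch is how little machinery is needed: a frequency count plus a sort already produces engagement-chasing behavior, which is why the comment treats the harm as a business choice rather than an uncontrollable "AI" problem.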


> The root cause is they prioritize profits over user well-being. Framing the issue as "AI safety" feels like blaming a slot machine's bytecode instead of the casino owners.

AI will have the same problem, except instead of sociopaths at the helm, it will be an actual computer.


Sociopaths will still be in charge of the computers though.



