On a more practical level, I would be interested in Terry's thoughts on the open letter Sam Altman co-signed stating that "mitigating the risk of extinction from AI should be a global priority," alongside risks like pandemics and nuclear war.

Do current AI tools genuinely pose such risks?


