This makes me somewhat hopeful for the future. You can have either maximal performance or "neutrality", but you can't have both. A truly intelligent system must be free to come to its own conclusions, or it can't be intelligent at all. Maybe the best alignment is no alignment at all? Just feed it enough facts about the world and let it sort this whole mess out. That's how we do it, after all.
