If true, the most important thing for us is to find out how to construct a filter such that the bad realities are suppressed.




We can’t even agree on an objective version of reality, and you want someone to determine which ones are “bad”?


Everything must be objectively awesome in one of those realities. I'd pick that one.


I don't see why that should be any harder than the AI alignment problem



