And what is the best path to a benevolent AI? The difficulty is that making an AGI benevolent is harder than making an AGI with unpredictable moral values.
Do we have reason to believe that releasing the ingredients of AGI to the general public accelerates safety research faster than it accelerates capabilities research?