> Now imagine an ASI that isn't confined to the Go board, but operating out in the world.
I don't think it's reasonable at all to look at a system's capability in games with perfect and easily-ingested information and extrapolate about its future capabilities interacting with the real world. What makes you confident that these problem domains are compatible?
That’s not what I was saying at all. I was using Go as an example of what the experience of being helplessly outclassed by a superior intelligence is like: you are losing and you don’t know why and there’s nothing you can do.