>It doesn't have to be perfect, just better than humans.
I have a different opinion on this.
Humans don’t like uncertainty. We like to feel like our mental model of reality can predict future outcomes. When it doesn’t, we get very uneasy. It’s why we don’t like dealing with erratic humans.
Part of the problem with AI is its lack of interpretability. People aren’t going to want to interact with AI if they can’t intuit what it will do, even if you can show it’s statistically better. The performance bar is going to be much higher than just a little better than humans. We don’t have that limitation when dealing with people because we can more easily infer their goals and actions.
Treating "a little better than humans" as the threshold is a rational position. But human trust is often irrational, and that irrationality drives politics, which can regulate AI into a corner.
"Better than humans" is not really meaningful, as human skill varies widely and can shift depending on circumstances and available resources.
Also, "better on average" is not a great target either. Sometimes it makes sense, but there are plenty of cases where we definitely don't want more merely average work.