Interesting how an intellectual movement claiming to know better than anyone else how the development of an unpredictable technology might pan out over the course of years failed to predict how one decision would pan out over the course of a single weekend.
Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.
> Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.
You may well be on to something. I'd trust a cabal of science fiction writers more than I would trust these self-appointed governors of our collective future. They lack imagination, for starters.
Considering they get most of their doomsday predictions from science fiction, I'd say that would be a smart bet. Why get it secondhand? Just go right to the source.
No, I didn't have him in mind. But he did have imagination. How many people do you know who can say they founded a church even crazier than the ones that were already out there?
Wasn't that a bet with Heinlein? His entry was "Stranger in a Strange Land," which, with a human raised by Martians as the protagonist, was a bit better.
Oh, and the Fosterites were poking fun at Scientology, Mormons, and proto-megachurch people.
I never said I didn't like them. I said they have asterisks next to their names, indicating that putting them on a board of ethics may not be the best idea.
Sorry, that Heinlein one is paywalled. I'll look further.
I've read all his stuff, even the horrible mess of the last books and the posthumous thing.
He was completely into the idea of turning into a woman even early on, to say nothing of the book where he had his brain put into the body of a secretary who died in an accident and then explored the ramifications.
I say "he" because Lazarus Long and the rest were him.
Got it. Had to re-read the thread; you were responding to someone else who put these authors on a pedestal, to be trusted more than corporate leaders. And you were just saying that, as good as the books were, the authors aren't saints either.
I'm not sure we want Orson Scott Card deciding the future of humanity, given his outspoken racist and homophobic beliefs, not to mention his warnings about Obama raising a secret army to become the next Hitler.
I mean, that's essentially the thesis of the notkilleveryoneist position: we don't know how to control powerful agents, and we need to pause AI development in order to figure it out.
> Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.
Actually, this is more or less the point that Eliezer Yudkowsky makes in this essay about the need for caution in AI development:
I doubt overconfidence is a problem specific to effective altruism. In any case, any good machine learning engineer knows that a dataset with only a single data point is essentially worthless -- even if we grant the premise that the board took the wrong action given the information they had available to them at the time.
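To put that point concretely: with n = 1 you can form a point estimate but no estimate of spread, so there's no way to tell whether the observed outcome was typical or a fluke. A toy sketch in Python (the value is made up):

    # Toy illustration: one observation gives a point estimate but no
    # estimate of variance, so "was this outcome typical?" is unanswerable.
    from statistics import mean, stdev, StatisticsError

    outcomes = [1.0]  # a single (hypothetical) observed outcome

    print(mean(outcomes))  # 1.0 -- the point estimate, for what it's worth
    try:
        print(stdev(outcomes))
    except StatisticsError as err:
        # stdev requires at least two data points; variance is undefined at n = 1
        print(f"no variance estimate: {err}")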
It's ironic how they'd have been more successful if they had followed recommendations from their own LLM [1] (or Bard [2]).
Of course, the recommendations are not that novel; that's CEO succession planning 101. But I guess none of those four had done any large-scale succession planning, and they were clearly out of their depth.
> how the development of an unpredictable technology might pan out
> how one decision would pan out
I'm not sure what point you are making here. Are you trying to say "see, the AI not-kill-everyone-ists couldn't predict the future even in the short term, therefore we shouldn't put much credence in the idea that the specific examples of AI doom they have given will happen"?
Or are you trying to imply that the idea of AI doom as a whole is bunk, because we can't predict the future... therefore everything will be fine...?
> Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.