Hacker News

Interesting how an intellectual movement claiming to know better than anyone else how development of an unpredictable technology might pan out over the course of years failed to predict how one decision would pan out over the course of a single weekend.

Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.



> Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.

You may well be on to something. I'd trust a cabal of science fiction writers more than I would trust these self-appointed governors of our collective future. They lack imagination, for starters.


Considering they get most of their doomsday predictions from science fiction, I'd say that would be a smart bet. Why get it second-hand? Just go right to the source.


Science fiction writers like L. Ron Hubbard?


No, I didn't have him in mind. But he did have imagination. How many people do you know who can say they founded a church that is even crazier than the ones that were already out there?


Wasn't that a bet with Heinlein? His entry was "Stranger in a Strange Land", which, with a human raised by Martians as the main character, was a bit better.

Oh, and the Fosterites were poking fun at Scientology, Mormons, and proto-megachurch people.

And his intent wasn't to rip off or lie to anyone.


Yes, it was.


Anthony Levandowski, to name one.


He founded a church?



WOTF? WTF??

Thank you for that, it had completely passed me by.


Good science fiction writers. Heinlein, Asimov, Clarke, even Orson Scott Card I'd trust more than today's politicians.


I have my doubts that those are good choices. But I would cautiously suggest Stephenson and Vinge.


All four of those have some asterisks after their names that you may want to become familiar with.


Please say what you mean.


If you insist:

- Heinlein

  https://www.spectator.com.au/2019/03/robert-a-heinlein-the-giant-of-sf-was-sexist-racist-and-certainly-no-stylist/
- Asimov

  https://www.publicbooks.org/asimovs-empire-asimovs-wall/
- Clarke

  https://www.vice.com/en/article/bjxp5m/we-asked-people-what-childhood-moment-shaped-them-the-most

  (nebulous, but credible)
- Orson Scott Card

  https://en.wikipedia.org/wiki/Orson_Scott_Card#Homosexuality_2
Sorry if that toppled any of your heroes.

I have books by all of them on my shelves, but they're not necessarily saints.


None of them were my heroes, except Card before he wrote Speaker for the Dead.

More importantly, if you don't like somebody, don't just hint at it — give us a link.


I never said I didn't like them. I said they have asterisks behind their names indicating that putting them on a board of ethics may not be the best idea.


By that standard nobody belongs on a board of ethics.


Fair enough. They were as ahead of their time as they were behind ours.


Sorry, that Heinlein one is paywalled; I'll look further. I've read all his stuff, even the horrible mess of the last books and the posthumous thing.

He was completely into the idea of turning into a woman even early on, to say nothing of the book where he had his brain put into the body of a secretary who died in an accident, and he went into the ramifications. I say "he" because Lazarus Long and the rest were him.


Heinlein article: https://archive.md/ojvdU

archive.xxx is problematic at best if you use Cloudflare's 1.1.1.1 DNS; it's fine with other providers (Quad9, Google, etc.).


Do you not read authors who don't have sparkly-clean private lives?

What authors are approved for reading?

I want to make sure I fit into the perfect cookie cutter world views.


Who said I don't read them?


Got it. I had to re-read the thread: you were responding to someone else who put these authors on a pedestal, to be trusted more than corporate leaders. And you were just saying that, as good as the books were, they aren't saints either.


Who doesn't?


Philip K. Dick for all the boards.


I'm not sure we want Orson Scott Card deciding the future of humanity given his outspoken racist and homophobic beliefs, not to mention his warnings about Obama raising a secret army to become the next Hitler.


By the time Hubbard started his religion, he wasn't really able to be part of a cabal of SF authors. He started his own cabal.


To put it another way: how are they going to control a superintelligent AI when they can't even control Sam Altman?


I mean, that's essentially the thesis of the notkilleveryoneist position: we don't know how to control powerful agents, and we need to pause AI development in order to figure it out.


You're making the case for the side you're critiquing.


>Perhaps there's some level of overconfidence at play from systems thinkers who overintellectualize their ability to conceptualize and extrapolate forward an impossibly complex system.

Actually, this is more or less the point that Eliezer Yudkowsky makes in this essay about the need for caution in AI development:

https://www.econlib.org/archives/2016/03/so_far_unfriend.htm... (see especially points A/B/C/D)

I doubt overconfidence is a problem specific to effective altruism. In any case, any good machine learning engineer knows that a dataset with only a single data point is essentially worthless -- even if we grant the premise that the board took the wrong action given the information they had available to them at the time.


It's ironic that they'd have been more successful if they had followed recommendations from their own LLM [1] (or Bard [2]).

Of course, the recommendations are not that novel; that's CEO succession planning 101. But I'd guess none of those four have done any large-scale succession planning, and they were clearly out of their depth.

[1] https://chat.openai.com/share/e77bc868-fe27-4346-9c13-0908e8...

[2] https://g.co/bard/share/549edfddc624


> how development of an unpredictable technology might pan out

> how one decision would pan out

I'm not sure what point you are making here. Are you trying to say "see, the AI not-kill-everyone-ists couldn't predict the future even in the short term, therefore we shouldn't put much credence into the idea that the specific examples of AI doom they have given will happen"?

Or are you trying to imply that the idea of AI doom as a whole is bunk because we can't predict the future... and therefore everything will be fine...?


Always has been!



