
Sometimes I feel like I'm losing my mind with this shit.

Am I to understand that a bunch of "experts" created a model, surrounded the findings of that model with a fancy website replete with charts and diagrams, that the website suggests the possibility of some doomsday scenario, that its headline says "We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution" (WILL be enormous, not MIGHT be), that they went on some of the biggest podcasts in the world talking about it, and that when a physicist comes along and says yeah, this is shoddy work, the clapback is "Well yeah, it's an informed guess, not physics or anything"?

What was the point of the website if this is just some guess? What was the point of the press tour? I mean are these people literally fucking insane?



No, you're wrong. They wrote the story before coming up with the model!

In fact the model and technical work have basically nothing to do with the short story, aka the part that everyone read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to have been generated by a completely different and unpublished model.


https://ai-2027.com/research says that:

AI 2027 relies on several key forecasts that couldn't be fully justified in the main text. Below we present the detailed research supporting these predictions.

You're saying the story was written first, then the models were created, and the two have nothing to do with one another? Then why does the research section say "Below we present the detailed research supporting these predictions"?


Yes, that's correct. The authors themselves are being extremely careful (and, I'd argue, misleading) in their wording. The right way to interpret those words is "this is literally a model that supports our predictions".

Here is the primary author of the timelines forecast:

> In our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our "best guess", "informed by trend extrapolations, wargames, ..." Then in the "How did we write it?" box we basically just say it was written iteratively and informed by wargames and feedback. [...] I don't think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.

> In our initial tweet, Daniel said it was a "deeply researched" scenario forecast. This still seems accurate to me, we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

Here is one staff member at Lightcone, the folks credited with the design work on the website:

> I think the actual epistemic process that happened here is something like:

> * The AI 2027 authors had some high-level arguments that AI might be a very big deal soon

> * They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world

> * As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to

> The right way to interpret the "timeline forecast" sections is not as "here is a simple extrapolation methodology that generated our whole worldview" but instead as a "here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth"

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...


This quote is kind of a killer for me: https://news.ycombinator.com/item?id=44065615. I mean, if your prediction disagrees with your short story, and you decide to just keep the story because changing the dates is too annoying, how seriously should anyone take you?


Ok, yeah, I take the point that one did not obviously precede the other, but that both are likely the coincident result of a shared worldview.

I don't think it changes anything but thanks for the correction.


Correct. Entirely.

And I'm yuge on LLMs.

It is very much one of those things that makes me feel old and/or scared, because I don't believe this would have been swallowed as easily, say, 10 years ago.

As neutrally as possible, I think everyone can agree:

- There was a good but very long overview of LLMs from an ex-OpenAI employee. Good stuff, really well-written.

- It rapidly concludes by hastily drawing a graph of "relative education level of AI" versus "year", then drawing a line from high school 2023 => college grad 2024 => PhD 2025 => post-PhD 2026 => AGI 2027.

- Later, this gets published by the same OpenAI guy, then the SlateStarCodex guy, and some other guy.

- You could describe it as taking the original, cutting out all the boring lead-up, jumping right to "AGI 2027", then writing out a too-cute-by-half, way-too-long geopolitics ramble about China vs. the US.

It's mildly funny to me, in that yesteryear's contrarians are today's MSM, and yet they face ~0 concerted criticism.

In the last comment thread on this article, someone jumped in to stress the importance of more "experts in the field" contributing, meaning psychiatrist Scott Siskind. The idea is that writing about something makes you an expert, which leads us to tedious self-fellating like Scott's recent article letting us know LLMs don't have to have an assistant character, and how he predicted this years ago.

It's less funny in that the next time a science research article is posted here, as is tradition, 30% of the comments will be claiming science writers never understand anything and can't write, etc.


Thank you for this comment, it is exactly my impression of all of this as well.


The point? MIRI and friends want more donations.


Well, yeah. Obviously.



