
I find Yudkowsky-style rationalists morbidly fascinating in the same way as Scientologists and other cults. Probably because they seem to genuinely believe they're living in a sci-fi story. I read a lot of their stuff, probably too much, even though I find it mostly ridiculous.

The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.

But AI being capable of researching or improving itself is not obvious; there are so many assumptions built into it!

- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.

- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?

- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)

Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum without concrete evidence could convince me that fast-takeoff superintelligence is possible.


I agree. There's also the point of hardware dependence.

From all we've seen, the practical ability of AI/LLMs seems to be strongly dependent on how much hardware you throw at it. Seems pretty reasonable to me - I'm skeptical that there's that much out there in gains from more clever code, algorithms, etc on the same amount of physical hardware. Maybe you can get 10% or 50% better or so, but I don't think you're going to get runaway exponential improvement on a static collection of hardware.

Maybe they could design better hardware themselves? Maybe, but then the process of improvement is still gated behind how fast we can physically build next-generation hardware, perfect the tools and techniques needed to make it, deploy with power and cooling and datalinks and all of that other tedious physical stuff.


I think you can get a few more gigantic step functions' worth of improvement on the same hardware. For instance, LLMs don't have any kind of memory, short or long term.


> it assumes that soon LLMs will gain the capability of assisting humans

No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs

It doesn't require AI to be better than humans for AI to take over because, unlike a human, an AI can be cloned. You have 2 AIs, then 4, then 8.... then millions. All able to do the same things as humans (the assumption of AGI). Build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.

PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.


> It does not assume that progress will be in LLMs

If that's the case then there's not as much reason to assume that this progress will occur now, and not years from now; LLMs are the only major recent development that gives the AI 2027 scenario a reason to exist.

> You have 2 AIs, then 4, then 8.... then millions

The most powerful AI we have now is strictly hardware-dependent, which is why only a few big corporations have it. Scaling it up or cloning it is bottlenecked by building more data centers.

Now it's certainly possible that there will be a development soon that makes LLMs significantly more efficient and frees up all of that compute for more copies of them. But there's no evidence that even state-of-the-art LLMs will be any help in finding this development; that kind of novel research is just not something they're any good at. They're good at doing well-understood things quickly and in large volume, with small variations based on user input.

> But the thought experiment doesn't seem indefensible.

The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability in fields like software or research, using better algorithms and data alone.

Take https://ai-2027.com/research/takeoff-forecast as an example: it's the side page of AI 2027 that attempts to deal with these types of objections. It spends hundreds of paragraphs on what the impact of AI reaching a "superhuman coder" level will be on AI research, and on the difference between the effectiveness of an organization's average and best researchers, and the impact of an AI closing that gap and having the same research effectiveness as the best humans.

But what goes completely unexamined and unjustified is the idea that AI will be capable of reaching "superhuman coder" level, or developing peak-human-level "research taste", at all, at any point, with any amount of compute or data. It's simply assumed that it will get there because the exponential curve of the recent AI boom will keep going up.

Skills like "research taste" can't be learned at a high level from books and the internet, even if, like ChatGPT, you've read the entire Internet and can see all the connections within it. They require experience, trial and error. Probably the same amount that a human expert would require, but even that assumes we can make an AI that can learn from experience as efficiently as a human, and we're not there yet.


> The most powerful AI we have now is strictly hardware-dependent

Of course that's the case and it always will be - the cutting edge is the cutting edge.

But the best AI you can run on your own computer is way better than the state of the art just a few years ago - progress is being made at all levels of hardware requirements, and hardware is progressing as well. We now have dedicated hardware in some of our own devices for doing AI inference - the hardware-specificity of AI doesn't mean we won't continue to improve and commoditise said hardware.

> The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability [...]

I don't think this is at all unexamined. But I think it's risky to not consider the strong possibility when we have an existence proof in ourselves of that level of intelligence, and an algorithm to get there, and no particular reason to believe we're optimal since that algorithm - evolution - did not optimise us for intelligence alone.


> No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs

I mean, for the specific case of the 2027 doomsday prediction, it really does have to be LLMs at this point, just given the timeframes. It is true that the 'rationalist' AI doomerism thing doesn't depend on LLMs, and in fact predates transformer-based models, but for the 2027 thing, it's gotta be LLMs.


> - What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?

I think what's more plausible is that there is general intelligence, and humans have that, and it's general in the same sense that Turing machines are general, meaning that there is no "higher form" of intelligence that has strictly greater capability. Computation speed, memory capacity, etc. can obviously increase, but those are available to biological general intelligences just like they would be available to electronic general intelligences.


I agree that general intelligence is general. But increasing computation speed 1000x could still be something that is available to the machines and not to the humans, simply because electrons are faster than neurons. Also, how specifically would you 1000x increase human memory?


The first way we increased human memory by 1000x was with books. Now it’s mostly with computers.

Electronic AGI might have a small early advantage because it’s probably easier for them to have high-speed interfaces to computing power and memory, but I would be surprised if the innovations required to develop AGI wouldn’t also help us interface our biology with computing power and memory.

In my view this isn’t much less concerning than saying “AGI will have a huge advantage in physical strength because of powerful electric motors, hydraulics, etc.”


An interesting point you make there — one would assume that if recursive self-improvement were a thing, Nature would have already led humans into that "hall of mirrors".


I often like to point out that Earth was already consumed by Grey Goo, and today we are hive-minds in titanic mobile megastructure-swarms of trillions of the most complex nanobots in existence (that we know of), inheritors of tactics and capabilities from a zillion years of physical and algorithmic warfare.

As we imagine the ascension of AI/robots, it may seem like we're being humble about ourselves... But I think it's actually the reverse: It's a kind of hubris elevating our ability to create over the vast amount we've inherited.


To take it a little further - if you stretch the conventional definition of intelligence a bit - we already assemble ourselves into a kind of collective intelligence.

Nations, corporations, clubs, communes -- any functional group of humans is capable of observing, manipulating, and understanding our environment in ways no individual human is capable of. When we dream of hive minds and super-intelligent AI it almost feels like we are giving up on collaboration.


We can probably thank our individualist mindset for that. (Not that it's all negative.)


There's a variant of this that argues that humans are already as intelligent as it's possible to be. Because if it's possible to be more intelligent, why aren't we? And a slightly more reasonable variant that argues that we're already as intelligent as it's useful to be.


"Because if it's possible to be more intelligent, why aren't we?"

Because deep abstract thoughts about the nature of the universe and elaborate deep thinking were maybe not as useful while we were chasing lions and buffaloes with a spear?

We just had to be smarter than them. Which included finding out that tools were great, learning about the habits of the prey, and optimizing hunting success. Those who were smarter in that capacity had a greater chance of reproducing. Those who just excelled at thinking likely did not live that long.


Is it just dumb luck that we're able to create knowledge about black holes, quarks, and lots of things in between which presumably had zero evolutionary benefit before a handful of generations ago?


Basically yes it is luck, in the sense that evolution is just randomness with a filter of death applied, so whatever brains we happen to have are just luck.

The brains we did end up with are really bad at creating that sort of knowledge. Almost none of us can. But we’re good at communicating, coming up with simplified models of things, and seeing how ideas interact.

We’re not universe-understanders, we’re behavior modelers and concept explainers.


I wasn't referring to the "luck" factor of evolution, which is of course always there. I was asking whether "luck" is the reason that the cognitive capabilities which presumably were selected for also came with cognitive capabilities that almost certainly were not selected for.

My guess is that it's not dumb luck, and that what we evolved is in fact general intelligence, and that this was an "easier" way to adapt to environmental pressure than to evolve a grab bag of specific (non-general) cognitive abilities. An implication of this claim would be that we are universe-understanders (or at least that we are biologically capable of that, given the right resources and culture).

In other words, it's roughly the same answer for the question "why do washing machines have Turing complete microcontrollers in them when they only need to do a very small number of computing tasks?" At scale, once you know how to implement general (i.e. Turing-complete and programmable) computers it tends to be simpler to use them than to create purpose-built computer hardware.


Evolution rewarded us for developing general intelligence. But with a very immediate practical focus and not too much specialisation.


I don't think the logic follows here. Nor does it match evidence.

The premise is ignorant of time. It is also ignorant of the fact that we know there are a lot of things we don't know. That's all before we consider other factors, like whether there are limits and physical barriers, or many other things.


While I'm deeply and fundamentally skeptical of the recursive self-improvement/singularity hypothesis, I also don't really buy this.

There are some pretty obvious ways we could improve human cognition if we had the ability to reliably edit or augment it. Better storage & recall. Lower distractibility. More working memory capacity. Hell, even extra hands for writing on more blackboards or putting up more conspiracy theory strings at a time!

I suppose it might be possible that, given the fundamental design and structure of the human brain, none of these things can be improved any further without catastrophic side effects—but since the only "designer" of its structure is evolution, I think that's extremely unlikely.


Some of your suggestions, if you don't mind my saying, seem like only modest improvements — akin to Henry Ford's quote “If I had asked people what they wanted, they would have said a faster horse.”

To your point though, an electronic machine is a different host altogether with different strengths and weaknesses.


Well, twic's comment didn't say anything about revolutionary improvements, just "maybe we're as smart as we can be".


Well, arguably that's exactly where we are, but machines can evolve faster.

And that's an entire new angle that the cultists are ignoring... because superintelligence may just not be very valuable.

And we don't need superintelligence for smart machines to be a problem anyway. We don't need even AGI. IMO, there's no reason to focus on that.


> Well, arguably that's exactly where we are

Yep; from the perspective of evolution (and more specifically, those animal species that only gain capability generationally by evolutionary adaptation of instinct), humans are the recursively self-(fitness-)improving accident.

Our species-aggregate capacity to compete for resources within the biosphere went superlinear in the middle of the previous century; and we've had to actively hit the brakes on how much of everything we take since then, handicapping ourselves. (With things like epidemic obesity and global climate change being the result of us not hitting those brakes quite hard enough.)

Insofar as a "singularity" can be defined on a per-agent basis, as the moment when something begins to change too rapidly for the given agent to ever hope to catch up with / react to new conditions — and so the agent goes from being a "player at the table" to a passive observer of what's now unfolding around them... then, from the rest of our biosphere's perspective, they've 100% already witnessed the "human singularity."

No living thing on Earth besides humans now has any comprehension of how the world has been or will be reshaped by human activity; nor can ever hope to do anything to push back against such reshaping. Every living thing on Earth other than humans, will only survive into the human future, if we humans either decide that it should survive, and act to preserve it; or if we humans just ignore the thing, and then just-so-happen to never accidentally do anything to wipe it from existence without even noticing.


> machines can evolve faster

[Squinty Thor] "Do they though?"

I think it's valuable to challenge this popular sentiment every once-in-a-while. Sure, it's a good poetic metaphor, but when you really start comparing their "lifecycle" and change-mechanisms to the swarming biological nanobots that cover the Earth, a bunch of critical aspects just aren't there or are being done to them rather than by them.

At least for now, these machines mostly "evolve" in the same sense that fashionable textile pants "evolve".


> What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

This is sort of what I subscribe to as the main limiting factor, though I'd describe it differently. It's sort of like Amdahl's Law (and I imagine there's some sort of Named law that captures it, I just don't know the name): the magic AI wand may be very good at improving some part of AGI capability, but the more you improve that part, the more the other parts come to dominate. Metaphorically, even if the juice is worth the squeeze initially, pretty soon you'll only be left with a dried-out fruit clutched in your voraciously energy-consuming fist.
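
To make that concrete, here's a toy Amdahl's-Law calculation (a minimal sketch; the 70% figure and the speedups are made up purely for illustration):

  # Amdahl's-Law-style limit on recursive self-improvement (illustrative numbers).
  # Assume only a fraction p of "getting smarter" is improvable by the AI itself
  # (better algorithms), while the rest (data collection, hardware, physical
  # experiments) stays fixed.

  def overall_speedup(p, s):
      """Amdahl's Law: p = improvable fraction, s = speedup of that part."""
      return 1.0 / ((1.0 - p) + p / s)

  for s in (2, 10, 100, 1_000_000):
      print(s, round(overall_speedup(0.7, s), 2))
  # Prints ~1.54, 2.7, 3.26, 3.33: even an infinite speedup of the improvable
  # 70% caps the overall gain at 1 / (1 - 0.7) ≈ 3.3x.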

I'm actually skeptical that there's much juice in the first place; I'm sure today's AIs could generate lots of harebrained schemes for improvement very quickly, but exploring those possibilities is mind-numbingly expensive. Not to mention that the evaluation functions are unreliable, unknown, and non-monotonic.

Then again, even the current AIs have convinced a large number of humans to put a lot of effort into improving them, and I do believe that there are a lot of improvements that humans are capable of making to AI. So the human-AI system does appear to have some juice left. Where we'll be when that fruit is squeezed down to a damp husk, I have no idea.


The built in assumptions are always interesting to me, especially as it relates to intelligence. I find many of them (though not all), are organized around a series of fundamental beliefs that are very rarely challenged within these communities. I should initially mention that I don't think everyone in these communities believes these things, of course, but I think there's often a default set of assumptions going into conversations in these spaces that holds these axioms. These beliefs more or less seem to be as follows:

1) They believe that there exists a singular factor to intelligence in humans which largely explains capability in every domain (a super g factor, effectively).

2) They believe that this factor is innate, highly biologically regulated, and a static property of a person (someone who is high IQ, in their minds, must have been a high-achieving child and must be very capable as an adult; these are the baseline assumptions). There is potentially some belief that this can be shifted in certain directions, but broadly there is an assumption that you either have it or you don't; there is no sense of it as something that could be taught or developed without pharmaceutical intervention or some other method.

3) There is also broadly a belief that this factor is at least fairly accurately measured by modern psychometric IQ tests and educational achievement, and that this factor is a continuous measurement with no bounds on it (You can always be smarter in some way, there is no max smartness in this worldview).

These are things that certainly could be true, and perhaps I haven't read enough into the supporting evidence for them but broadly I don't see enough evidence to have them as core axioms the way many people in the community do.

More to your point though, when you think of the world from those sorts of axioms above, you can see why an obsession would develop with the concept of a certain type of intelligence being recursively improving. A person who has become convinced of their moral placement within a societal hierarchy based on their innate intellectual capability has to grapple with the fact that there could be artificial systems which score higher on the IQ tests than them, and if those IQ tests are valid measurements of this super intelligence factor in their view, then it means that the artificial system has a higher "ranking" than them.

Additionally, in the mind of someone who has internalized these axioms, there is no vagueness about increasing intelligence! For them, intelligence is the animating factor behind all capability; it has a central place in their mind as who they are and the explanatory factor behind all outcomes. There is no real distinction between capability in one domain or another mentally in this model, there is just how powerful a given brain is. Having the singular factor of intelligence in this mental model means being able to solve more difficult problems, and lack of intelligence is the only barrier between those problems being solved vs unsolved.

For example, there's a common belief among certain groups in the online tech world that all governmental issues would be solved if we just had enough "high-IQ people" in charge of things, irrespective of their lack of domain expertise. I don't think this has been particularly well borne out by recent experiments, however. This also touches on what you mentioned in terms of an AI system potentially maximizing the "wrong types of intelligence", where there isn't a space in this worldview for a wrong type of intelligence.


I think you'll indeed find, if you were to seek out the relevant literature, that those claims are more or less true, or at least, are the currently best-supported interpretation available. So I don't think they're assumptions so much as simply current state of the science on the matter, and therefore widely accepted among those who for whatever reason have looked into it (or, more likely, inherited the information from someone they trust who has read up on it).

Interestingly, I think we're increasingly learning that although most aspects of human intelligence seem to correlate with each other (thus the "singular factor" interpretation), the grab-bag of skills this corresponds to are maybe a bit arbitrary when compared to AI. What evolution decided to optimise the hell out of in human intelligence is specific to us, and not at all the same set of skills as you get out of cranking up the number of parameters in an LLM.

Thus LLMs continuing to make atrocious mistakes of certain kinds, despite outshining humans at other tasks.

Nonetheless I do think it's correct to say that the rationalists think intelligence is a real measurable thing, and that although in humans it might be a set of skills that correlate and maybe in AIs it's a different set of skills that correlate (such that outperforming humans in IQ tests is impressive but not definitive), that therefore AI progress can be measured and it is meaningful to say "AI is smarter than humans" at some point. And that AI with better-than-human intelligence could solve a lot of problems, if of course it doesn't kill us all.


My general disagreements with those axioms from my reading of the literature are around the concepts of immutability and of the belief in the almost entirely biological factor, which I don't think is well supported by current research in genetics, but that may change in the future. I think primarily I disagree about the effect sizes and composition of factors with many who hold these beliefs.

I do agree with you in that I generally have an intuition that intelligence in humans is largely defined as a set of skills that often correlate, I think one of the main areas I differ in interpretation is in the interpretation of the strength of those correlations.


I think most in the rationality community (and otherwise in the know) would not say that IQ differences are almost entirely biological - I think they'd say they're about half genetic and half environmental, but that the environmental component is hard to pin to "parenting" or anything else specific. "Non-shared environment" is the usual term.

They'd agree it's largely stable over life, after whatever childhood environmental experiences shape that "non-shared environment" bit.

This is the current state of knowledge in the field as far as I know - IQ is about half genetic, and fairly immutable after adulthood. I think you'll find the current state of the field supports this.


It's kinda weird how the level of discourse seems to be what you get when a few college students sit around smoking weed. Yet somehow this is taken as very serious and profound in the valley, and VCs throw money at it.


I've pondered recursive self-improvement. I'm fairly sure it will be a thing - we're at a point already where people could try telling Claude or some such to have a go, even if not quite at a point where it would work. But I imagine takeoff would be very gradual. It would be constrained by available computing resources and probably only comparable to current human researchers, and so would still take ages to get anywhere.


I honestly am not trying to be rude when I say this, but this is exactly the sort of speculation I find problematic and that I think most people in this thread are complaining about. Being able to tell Claude to have a go has no relation at all to whether it may ever succeed, and you don't actually address any of the legitimate concerns the comment you're replying to points out. There really isn't anything in this comment but vibes.


I don't think it's vibes rather than my thinking about the problem.

If you look at the "legitimate concerns" none are really deal breakers:

>What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?

I'm willing to believe it will be slow, though maybe it won't

>LLMs already seem to have hit a wall of diminishing returns

Who cares - there will be other algorithms

>What if there are several paths to different kinds of intelligence with their own local maxima

well maybe, maybe not

>Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?

well - you can make another one if the first does that

Those are all potential difficulties with self improvement, not reasons it will never happen. I'm happy to say it's not happening right now but do you have any solid arguments that it won't happen in the next century?

To me the arguments against sound like people in the 1800s discussing powered flight and saying it'll never happen because steam engine development has slowed.


On the other hand, I'm baffled to encounter recursive self-improvement being discussed as something not only weird to expect, but as damning evidence of sloppy thinking by those who speculate about it.

We have an existence proof for intelligence that can improve AI: humans.

If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself.

Are people really that sceptical that AI will get to human level intelligence?

Is that an insane belief worthy of being a primary example of a community not thinking clearly?

Come on! There is a good chance AI will recursively self-improve! Those poo pooing this idea are the ones not thinking clearly.


Consider that even the named phenomenon is sloppy: "recursive self improvement" does not imply "self improvement without bounds". This is the "what if you hit diminishing returns and never get past it" claim. Absolutely no justification for the jump, ever, among AI boosters.

> If AI ever gets to human-level intelligence

This picture of intelligence as a numerical scale that you just go up or down, with ants at the bottom and humans/AI at the top, is very very shaky. AI is vulnerable to this problem, because we do not have a definition of intelligence. We can attempt to match up capabilities LLMs seem to have with capabilities humans have, and if the capability is well-defined we may even be able to reason about how stable it is relative to how LLMs work.

For "reasoning" we categorically do not have this. There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure. IIRC there was a recent paper about giving LLMs more processing time, and this reduced performance. Same with adding extraneous details; sometimes that reduces performance too. What if eventually everything you try reduces performance? Totally unaddressed.

> Is that an insane belief worthy of being a primary example of a community not thinking clearly?

I really need to stress this: thinking clearly is about the reasoning, not the conclusion. Given the available evidence, no legitimate argument has been presented that implies the conclusion. This does not mean the conclusion is wrong! But just putting your finger in the air and saying "the wind feels right, we'll probably have AGI tomorrow" is how you get bubbles and winters.


>"recursive self improvement" does not imply "self improvement without bounds"

I was thinking that. I mean, if you look at something like AlphaGo, it was based on human training, and then they made one I think called AlphaZero which learned by playing against itself and got very good but not infinitely good, as it was still constrained by hardware. I think with chess the best human is about 2800 on the Elo scale and computers about 3500. I imagine self-improving AI would be like that - smarter than humans but not infinitely so, and constrained by hardware.
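
To put a rough number on that gap, here's the standard Elo expected-score formula with the approximate ratings above (a sketch; the exact ratings don't matter for the point):

  # Standard Elo expected-score formula; ratings are the rough figures above.
  def expected_score(r_a, r_b):
      return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

  print(expected_score(2800, 3500))  # ~0.017: the engine nearly always wins,
                                     # yet it's bounded, not "infinitely" better.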

Also like humans still play chess even if computers are better, I imagine humans will still do the usual kind of things even if computers get smarter.


Also: individual ants might be quite dumb, but ant colonies do seem to be one of the smartest entities we know of.


> "recursive self improvement" does not imply "self improvement without bounds"

Obviously not, but thinking that the bounds are going to lie in between where AI intelligence is now and human intelligence I think is unwarranted - as mentioned, humans are unlikely to be the peak of what's possible since evolution did not optimise us for intelligence alone.

If you think the recursive self-improvement people are arguing for improvement without bounds, I think you're simply mistaken, and it seems like you have not made a good faith effort to understand their view.

AI only needs to be somewhat smarter than humans to be very powerful, the only arguments worth having IMHO are over whether recursive self-improvement will lead to AI being a head above humans or not. Diminishing returns will happen at some point (in the extreme due to fundamental physics, if nothing sooner), but whether it happens in time to prevent AI from becoming meaningfully more powerful than humans is the relevant question.

> we do not have a definition of intelligence

This strikes me as an unserious argument to make. Some animals are clearly more intelligent than others, whether you use a shaky definition or not. Pick whatever metric of performance on intellectual tasks you like, there is such a thing as human-level performance, and humans and AIs can be compared. You can't even make your subsequent arguments about AI performance being made worse by various factors unless you acknowledge such performance is measuring something meaningful. You can't even argue against recursive self-improvement if you reject that there is anything measurable that can be improved. I think you should retract this point as it prevents you making your own arguments.

> There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure.

I'm pretty confused by this claim - whatever our difficulties defining intelligence, "resembling humans" is not it. Do you not believe there are tasks on which performance can be objectively graded beyond similarity to humans? I think it's quite easy to define tasks that we can judge the success of without being able to do it ourselves. If AI solves all the Millennium Prize Problems, that would be amazing! I don't need to have resolved all issues with a definition of intelligence to be impressed.

Anyway, is there really no evidence? AI having improved so far is not any evidence that it might continue, even a little bit? Are we really helpless to predict whether there will be any better chatbots released in the remainder of this year than we already have?

I do not think we are that helpless - if you entirely reject past trends as an indicator of future trends, and treat them as literally zero evidence at all, then this is simply faulty reasoning. Past trends are not a guarantee of future trends, but neither are they zero evidence. They are a nonzero medium amount of evidence, the strength of which depends on how long the trends have been going on and how well we understand the fundamentals driving them.

> thinking clearly is about the reasoning, not the conclusion.

And I think we have good arguments! You seem to have strong priors that the default is that machines can't reach human intelligence/performance or beyond, and you really need convincing otherwise. I think the fact that we have an existence proof in humans of human intelligence and an algorithm to get there proves it's possible. And I consider it quite unlikely that humans are the peak of intelligence/performance-on-whatever-metrics that is possible given it's not what we were optimised for specifically.

All your arguments about why progress might slow or stop short of superhuman-levels are legitimate and can't be ruled out, and yet these things have not been limiting factors so far despite that they would have been equally valid to make these arguments any time in the past few years.

> no legitimate argument has been presented that implies the conclusion

I mean it's probabilistic, right? I'm expecting something like an 85% chance of AGI before 2040. I don't think it's guaranteed, but when you look at progress so far, and that nature gives us proof (in the form of the human brain) that it's not impossible in any fundamental way, I think that's reasonable. Reasonability arguments and extrapolations are all we have, we can't imply anything definitively.

You think what probability?

Interested in a bet?


> We have an existence proof for intelligence that can improve AI: humans.

I don't understand what you mean by this. The human brain has not meaningfully changed, biologically, in the past 40,000 years.

We, collectively, have built a larger base of knowledge and learned to cooperate effectively enough to make large changes to our environment. But that is not the same thing as recursive self-improvement. No one has been editing our genes or performing brain surgery on children to increase our intelligence or change the fundamental way it works.

Modern brains don't work "better" than those of ancient humans, we just have more knowledge and resources to work with. If you took a modern human child and raised them in the middle ages, they would behave like everyone else in the culture that raised them. They would not suddenly discover electricity and calculus just because they were born in 2025 instead of 950.

----

And, if you are talking specifically about the ability to build better AI, we haven't matched human intelligence yet and there is no indication that the current LLM-heavy approach will ever get there.


I just mean that the existence of the human brain is proof that human-level intelligence is possible.

Yes it took billions of years all said and done, but it shows that there are no fundamental limits that prevent this level of intelligence. It even proves it can in principle be done with a few tens of watts and a certain approximate amount of computational power.

Some used to think the first AIs would be brain uploads, for this reason. They thought we'd have the computing power and scanning techniques to scan and simulate all the neurons of a human brain before inventing any other architecture capable of coming close to the same level of intelligence. That now looks to be less likely.

Current state-of-the-art AI still operates with less computational power than the human brain, and it is far less efficient at learning than humans are (there is a sense in which a human intelligence takes merely years to develop - i.e. childhood - rather than billions of years; this is also a relevant comparison to make). Humans can learn from far fewer examples than current AI can.

So we've got some catching up to do - but humans prove it's possible.


Culture is certainly one aspect of recursive self-improvement.

Somewhat akin to 'software' if you will.


Yeah, to compare Yudkowsky to Hubbard I've read accounts of people who read Dianetics or Science of Survival and thought "this is genius!" and I'm scratching my head and it's like they never read Freud or Horney or Beck or Berne or Burns or Rogers or Kohut, really any clinical psychology at all, even anything in the better 70% of pop psychology. Like Hubbard, Yudkowsky is unreadable, rambling [1] and inarticulate -- how anybody falls for it boggles my mind [2], but hey, people fell for Carlos Castaneda who never used a word of the Yaqui language or mentioned any plant that grows in the desert in Mexico but has Don Juan give lectures about Kant's Critique of Pure Reason [3] that Castaneda would have heard in school and you would have heard in school too if you went to school or would have read if you read a lot.

I can see how it appeals to people like Aella who wash into San Francisco without exposure to education [4] or philosophy or computer science or any topics germane to the content of Sequences -- not like it means you are stupid but, like Dianetics, Sequences wouldn't be appealing if you were at all well read. How people at frickin' Oxford or Stanford fall for it is beyond me, however.

[1] some might even say a hypnotic communication pattern inspired by Milton Erickson

[2] you'd think people would dismiss Sequences because it's a frickin' Harry Potter fanfic, but I think it's like the 419 scam email which is riddled with typos that are meant to drive the critical thinker away and, ironically in the case of Sequences, keep the person who wants to cosplay as a critical thinker.

[3] minus any direct mention of Kant

[4] thus many of the marginalized, neurodivergent, transgender who left Bumfuck, AK because they couldn't live at home and went to San Francisco to escape persecution as opposed to seek opportunity


I thought the Sequences were the blog posts and the fanfic was kept separate, to nitpick


> like Dianetics, Sequences wouldn't be appealing if you were at all well read.

That would require an education in the humanities, which is low status.


Well, there is "well read" and "educated", which aren't the same thing. I started reading when I was three and checked out ten books a week from the public library throughout my youth. I was well read in psychology, philosophy and such long before I went to college -- I got a PhD in a STEM field so I didn't read a lot of that stuff for classes [1], but I still read a lot of that stuff.

Perhaps the reason why Stanford and Oxford students are impressed by that stuff is that they are educated but not well read, which has a few angles: STEM privileged over the humanities, the rise of Dyslexia culture, and a shocking level of incuriosity in "nepo baby" professors [2] who are drawn to the profession not because of a thirst for knowledge but because it's the family business.

[1] did get an introduction to https://en.wikipedia.org/wiki/Rogerian_argument and took a relatively "woke" (in a good way) Shakespeare class such that https://en.wikipedia.org/wiki/Troilus_and_Cressida is my favorite

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC9755046/


I'm surprised not to see much pushback on your point here, so I'll provide my own.

We have an existence proof for intelligence that can improve AI: humans can do this right now.

Do you think AI can't reach human-level intelligence? We have an existence proof of human-level intelligence: humans. If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?

Do you not think human-level intelligence is some kind of natural maximum? Why? That would be strange, no? Even if you think it's some natural maximum for LLMs specifically, why? And why do you think we wouldn't modify architectures as needed to continue to make progress? That's already happening, our LLMs are a long way from the pure text prediction engines of four or five years ago.

There is already a degree of recursive improvement going on right now, but with humans still in the loop. AI researchers currently use AI in their jobs, and despite the recent study suggesting AI coding tools don't improve productivity in the circumstances they tested, I suspect AI researchers' productivity is indeed increased through use of these tools.

So we're already on the exponential recursive-improvement curve, it's just that it's not exclusively "self" improvement until humans are no longer a necessary part of the loop.

On your specific points:

> 1. What if increasing intelligence has diminishing returns, making recursive improvement slow?

Sure. But this is a point of active debate between "fast take-off" and "slow take-off" scenarios; it's certainly not settled among rationalists which is more plausible, and it's a straw man to suggest they all believe in a fast take-off scenario. But both fast and slow take-off due to recursive self-improvement are still recursive self-improvement, so if you only want to criticise the fast take-off view, you should speak more precisely.

I find both slow and fast take-off plausible, as the world has seen both periods of fast economic growth through technology, and slower economic growth. It really depends on the details, which brings us to:

> 2. LLMs already seem to have hit a wall of diminishing returns

This is IMHO false in any meaningful sense. Yes, we have to use more computing power to get improvements without doing any other work. But have you seen METR's metric [1] on AI progress in terms of the (human) duration of task they can complete? This is an exponential curve that has not yet bent, and if anything has accelerated slightly.

Do not confuse GPT-5 (or any other incrementally improved model) failing to live up to unreasonable hype for an actual slowing of progress. AI capabilities are continuing to increase - being on an exponential curve often feels unimpressive at any given moment, because the relative rate of progress isn't increasing. This is a fact about our psychology, if we look at actual metrics (that don't have a natural cap like evals that max out at 100%, these are not good for measuring progress in the long-run) we see steady exponential progress.
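
As a back-of-the-envelope illustration of what an unbent exponential implies - taking at face value the roughly-seven-month doubling time METR reported for the 50%-success time horizon, which may well not continue, and assuming (purely for illustration) a ~1-hour horizon today:

  # Naive extrapolation; the doubling time is METR's reported figure, the
  # starting horizon is an assumption, and neither is guaranteed to hold.
  horizon_minutes = 60.0
  doubling_months = 7.0
  for months in (12, 24, 48):
      projected = horizon_minutes * 2 ** (months / doubling_months)
      print(months, round(projected / 60, 1), "hours")
  # ~3.3 hours after a year, ~10.8 after two, ~116 after four -
  # the point is only how quickly an unbent exponential compounds.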

> 3. What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?

This seems valid. But it seems to me that unless we see METR's curve bend soon, we should not count on this. LLMs have specific flaws, but I think if we are honest with ourselves and not over-weighting the specific silly mistakes they still make, they are on a path toward human-level intelligence in the coming years. I realise that claim will sound ridiculous to some, but I think this is in large part due to people instinctively internalising that everything LLMs can do is not that impressive (it's incredible how quickly expectations adapt), and therefore over-indexing on their remaining weaknesses, despite those weaknesses improving over time as well. If you showed GPT-5 to someone from 2015, they would be telling you this thing is near human intelligence or even more intelligent than the average human. I think we all agree that's not true, but I think that superficially people would think it was if their expectations weren't constantly adapting to the state of the art.

> 4. Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?

It might - but do we think it would? I have no idea. Would you wirehead yourself if you could? I think many humans do something like this (drug use, short-form video addiction), and expect AI to have similar issues (and this is one reason it's dangerous) but most of us don't feel this is an adequate replacement for "actually" satisfying our goals, and don't feel inclined to modify our own goals to make it so, if we were able.

> Knowing Yudowsky I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory

Uncalled for I think. There are valid arguments against you, and you're pre-emptively dismissing responses to you by vaguely criticising their longness. This comment is longer than yours, and I reject any implication that that weakens anything about it.

Your criticisms are three "what ifs" and a (IMHO) falsehood - I don't think you're doing much better than "millions of words of theory without evidence". To the extent that it's true Yudkowsky and co theorised without evidence, I think they deserve cred, as this theorising predated the current AI ramp-up at a time when most would have thought AI anything like what we have now was a distant pipe dream. To the extent that this theorising continues in the present, it's not without evidence - I point you again to METR's unbending exponential curve.

Anyway, so I contend your points comprise three "what ifs" and (IMHO) a falsehood. Unless you think "AI can't recursively self-improve itself" already has strong priors in its favour such that strong arguments are needed to shift that view (and I don't think that's the case at all), this is weak. You will need to argue why we should need to have strong evidence to overturn a default "AI can't recursively self-improve" view, when it seems that a) we are already seeing recursive improvement (just not purely "self"-improvement), and that it's very normal for technological advancement to have recursive gains - see e.g. Moore's law or technological contributions to GDP growth generally.

Far from a damning example of rationalists thinking sloppily, this particular point seems like one that shows sloppy thinking on the part of the critics.

It's at least debatable, which is all it has to be for calling it "the biggest nonsense axiom" to be a poor point.

[1] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...


Yudkowsky seems to believe in fast take off, so much so that he suggested bombing data centers. To more directly address your point, I think it’s almost certain that increasing intelligence has diminishing returns and the recursive self improvement loop will be slow. The reason for this is that collecting data is absolutely necessary and many natural processes are both slow and chaotic, meaning that learning from observation and manipulation of them will take years at least. Also lots of resources.

Regarding LLMs, I think METR is a decent metric. However, you have to consider the cost of achieving each additional hour or day of task horizon. I’m open to correction here, but I would bet that the cost curves are more exponential than the improvement curves. That would be fundamentally unsustainable and point to a limitation of LLM training/architecture for reasoning and world modeling.

Basically I think the focus on recursive self improvement is not really important in the real world. The actual question is how long and how expensive the learning process is. I think the answer is that it will be long and expensive, just like our current world. No doubt having many more intelligent agents will help speed up parts of the loop but there are physical constraints you can’t get past no matter how smart you are.


How do you reconcile e.g. AlphaGo with the idea that data is a bottleneck?

At some point learning can occur with "self-play", and I believe this is already happening with LLMs to some extent. Then you're not limited by imitating human-made data.

If learning something like software development or mathematical proofs, it is easier to verify whether a solution is correct than to come up with the solution in the first place, many domains are like this. Anything like that is amenable to learning on synthetic data or self-play like AlphaGo did.
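
A minimal sketch of that generate-then-verify loop (model.sample and verifier are hypothetical placeholders, not any real API):

  def collect_synthetic_data(model, problems, verifier, k=16):
      """Keep only candidate solutions the verifier accepts (e.g. a proof
      checker or a test suite), then reuse them as training data."""
      accepted = []
      for problem in problems:
          for candidate in model.sample(problem, n=k):   # hypothetical API
              if verifier(problem, candidate):           # cheap to check, hard to produce
                  accepted.append((problem, candidate))
      return accepted  # feed back into fine-tuning; no human labels required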

I can understand that people who think of LLMs as human-imitation machines, limited to training on human-made data, would think they'd be capped at human-level intelligence. However I don't think that's the case, and we have at least one example of superhuman AI in one domain (Go) showing this.

Regarding cost, I'd have to look into it, but I'm under the impression costs have been up and down over time as models have grown but there have also been efficiency improvements.

I think I'd hazard a guess that end-user costs have not grown exponentially like time horizon capabilities, even though investment in training probably has. Though that's tricky to reason about because training costs are amortised and it's not obvious whether end user costs are at a loss or what profit margin for any given model.

On the fast-slow takeoff - Yud does seem to believe in a fast takeoff, yes, but it's also one of the oldest disagreements in rationality circles, on which he disagreed with his main co-blogger on the original rationalist blog, Overcoming Bias; some discussion of this and more recent disagreements is here [1].

[1] https://www.astralcodexten.com/p/yudkowsky-contra-christiano...


AlphaGo showed that RL+search+self play works really well if you have an easy to verify reward and millions of iterations. Math partially falls into this category via automated proof checkers like Lean. So, that’s where I would put the highest likelihood of things getting weird really quickly. It’s worth noting that this hasn’t happened yet, and I’m not sure why. It seems like this recipe should already be yielding results in terms of new mathematics, but it isn’t yet.

That said, nearly every other task in the world is not easily verified, including things we really care about. How do you know if an AI is superhuman at designing fusion reactors? The most important step there is building a fusion reactor.

I think a better reference point than AlphaGo is AlphaFold. DeepMind found some really clever algorithmic improvements, but they didn’t know whether they actually worked until the CASP competition. CASP evaluated their model on new X-ray crystal structures of proteins. Needless to say, getting X-ray protein structures is a difficult and complex process. Also, they trained AlphaFold on thousands of existing structures that were accumulated over decades and required millennia of graduate-student hours to find. It’s worth noting that we have very good theories for all the basic physics underlying protein folding, but none of the physics-based methods work. We had to rely on painstakingly collected data to learn the emergent phenomena that govern folding. I suspect that this will be the case for many other tasks.


> How do you reconcile e.g. AlphaGo with the idea that data is a bottleneck?

Go is entirely unlike reality in that the rules are fully known and it can be perfectly simulated by a computer. AlphaGo worked because it could run millions of tests in a short time frame, because it is all simulated. It doesn't seem to answer the question of how an AI improves its general intelligence without real-world interaction and data gathering at all. If anything it points to the importance of doing many experiments and gathering data - and this becomes a bottleneck when you can't simply make the experiment run faster, because the experiment is limited by physics.


> If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?

Humans have a lot more going on than just an intelligence brain. The two big ones are: bodies, with which to richly interact with reality, and emotions/desire, which drive our choices. The one that I don't think gets enough attention in this discussion is the body. The body is critical to our ability to interact with the environment, and therefore learn about it. How does an AI do this without a body? We don't have any kind of machine that comes close to the level of control, feedback, and adaptability that a human body offers. That seems very far away. I don't think that an AI can just "improve itself" without being able to interact with the world in many ways and experiment. How does it find new ideas? How does it test its ideas? How does it test its abilities? It needs an extremely rich interface with the physical world, that external feedback is necessary for improvement. That requirement would put the prospect of a recursive self-improving AI much further into the future than many rationalists believe.

And of course, the "singularity" scenario does not just assume "recursive self-improvement", it assumes exponential recursive self-improvement all the way to superintelligence. This is highly speculative. It's just as possible that the curve is more logarithmic, sinusoidal, or linear. The reason to believe that fully exponential self-improvement is the likely scenario, based on the curve of some metric that hasn't existed for very long, does not seem solid enough to justify a strong belief. It is just as easy to imagine that intelligence gains get harder and harder as intelligence increases. We see many things that are exponential for a time, and then they aren't anymore, and basing big decisions on "this curve will be exponential all the way" because we're seeing exponential progress now, at the very early stages, does not seem sound.

Humans have human-level intelligence, but we are very far away from understanding our own brain such that we can modify it to increase our capacity for intelligence (to any degree significant enough to be comparable to recursive self-improvement). We have to improve the intelligence of humanity the hard way: spend time in the world, see what works, the smart humans make more smart humans (as do the dumb humans, which often slows the progress of the smart humans). The time spent in the world, observing and interacting with it, is crucial to this process. I don't doubt that machines could do this process faster than humans, but I don't think it's at all clear that they could do so, say, 10,000x faster. A design needs time in the world to see how it fares in order to gauge its success. You don't get to escape this until you have a perfect simulation of reality, which if it is possible at all is likely not possible until the AI is already superintelligent.

Presumably a superintelligent AI has a complete understanding of biology - how does it do that without spending time observing the results of biological experiments and iterating on them? Extrapolate that to the many other complex phenomena that exist in the physical world. This is one of the reasons that our understanding of computers has increased so much faster than our understanding of many physical sciences: to understand a complex system that we didn't create and don't have a perfect model of, we must do lots of physical experiments, and those experiments take time.

The crucial assumption that the AI singularity assumption relies on is that once intelligence hits a certain threshold, it can gaze at itself and self-improve to the top very quickly. I think this is fundamentally flawed, as we exist in a physical reality that underlies everything and defines what intelligence is. Interaction and experimentation with reality is necessary for the feedback loop of increasing intelligence, and I think this both severely limits how short that feedback loop can be, and makes the bar for an entity that can recursively self-improve itself much higher, as it needs a physical embodiment far more complex and autonomous than any robot we've managed to make.


  > The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. 
This is also the weirdest thing, and I don't think they even know the assumption they are making. It assumes that there is infinite knowledge to be had. It also ignores that, in reality, we have exceptionally strong indications that accuracy (truth, knowledge, whatever you want to call it) comes with exponential growth in complexity. These may be wrong assumptions, but we at least have evidence for them, and much more for the latter. So if objective truth exists, then that intelligence gap looks very, very different. One way they could still be right is if this is an S-curve and we humans are at the very bottom of it. That seems unlikely, though very possible. But they always treat this as linear or exponential, as if our relationship to the AI will be like an ant trying to understand us.

The other weird assumption I hear is that it'll just kill us all. The vast majority of smart people I know are very peaceful. They aren't even seeking power or wealth. They're too busy thinking about things and trying to figure everything out. They're much happier in front of a chalkboard than sitting on a yacht. And humans ourselves are incredibly compassionate towards other creatures. Maybe we learned this because coalitions are an incredibly powerful thing, but the truth is that if I could talk to an ant I'd choose that over laying traps. Really, that would be so much easier too! I'd even rather dig a small hole to get them started somewhere else than drive down to the store and do all that. A few shovels in the ground is less work, and I'd ask them to not come back and tell others.

Granted, none of this is absolutely certain. It'd be naive to assume that we know! But it seems like these cults are operating on the premise that they do know and that these outcomes are certain. It seems to just be preying on fear and uncertainty. Hell, even Altman does this, ignoring the risks and concerns of existing systems by shifting focus to "an even greater risk" that he himself is working towards (you can't simultaneously maximize speed and safety). Which, weirdly enough, might fulfill their own prophecies. The AI doesn't have to become sentient, but if it is trained on lots of writings about how AI turns evil and destroys everyone, then isn't that going to make a dumb AI that can't tell fact from fiction more likely to just do those things?


I think of it more like visualizing a fractal on a computer. The more detail you try to dig down into, the more detail you find, and pretty quickly you run out of precision in your model and the whole thing falls apart. Every layer further down you go, the resource requirements increase exponentially. That's why we have so many LLMs that seem beautiful at first glance but go to crap when the details really matter.
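
To make the precision point concrete, here's a toy sketch (the numbers and the loop are my own, purely illustrative): once the per-pixel spacing of a zoom drops below a double's resolution at that coordinate, adjacent pixels collapse to the same value and the picture degenerates.

  // Toy illustration: when does a deep fractal zoom exhaust double precision?
  const center = 0.3007;       // some coordinate on the real axis
  let pixelSpacing = 1e-2;     // width of one pixel at zoom level 0
  for (let zoom = 0; zoom < 60; zoom++) {
    if (center + pixelSpacing === center) {
      // adjacent pixels are now numerically identical: the model falls apart
      console.log(`precision exhausted at zoom level ${zoom}`);
      break;
    }
    pixelSpacing /= 2;         // each level doubles the magnification
  }

Arbitrary-precision arithmetic can push that wall back, but only by paying more per pixel at every level, which is the resource problem in miniature.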


soo many things make no sense in this comment that I feel like there's a 20% chance this is a mid-quality gpt. and so much interpolation effort, but starting from hearsay instead of primary sources. then the threads stop just before seeing the contradiction with the other threads. I imagine this is how we all reason most of the time, just based on vibes :(


Sure, I wrote a lot and it's a bit scattered. You're welcome to point to something specific but so far you haven't. Ironically, you're committing the error you're accusing me of.

I'm also not exactly sure what you mean because the only claim I've made is that they've made assumptions where there are other possible, and likely, alternatives. It's much easier to prove something wrong than prove it right (or in our case, evidence, since no one is proving anything).

So the first part I'm saying we have to consider two scenarios. Either intelligence is bounded or unbounded. I think this is a fair assumption, do you disagree?

In an unbounded case, their scenario can happen. So I don't address that. But if you want me to, sure: it's because I have no reason to believe information is unbounded when everything around me suggests that it's bounded. Maybe start with the Bekenstein bound. Sure, it doesn't prove information is bounded, but you'd then need to convince me that an entity not subject to our universe and our laws of physics is going to care about us and be malicious. Hell, that entity wouldn't even be subject to time, and we're still living.

In a bounded case it can happen, but we need to understand what conditions that requires. There are a lot of functions, but I went with an S-curve for simplicity and familiarity. It'll serve fine (we're on HN man...) for any monotonically increasing case (or even non-monotonic, it just needs to tend that way).

So think about it. Change the function if you want, I don't care. But if intelligence is bounded, and we're x times more intelligent than ants, where on the graph do we need to be for another thing to be x times more intelligent than us? There aren't a lot of opportunities for that to even happen. It requires our intelligence (on that hypothetical scale) to be pretty similar to an ant's. What cannot happen is for the ant to be in the tail of that function and for us to be past the inflection point (halfway). There just isn't enough space on that y-axis for anything to be x times more intelligent. This doesn't completely rule out that crazy superintelligence, but it does place some additional constraints that we can use to reason about things. For the "AI will be [human-to-ant difference] more intelligent than us" argument to follow, it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).
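
To put toy numbers on that (a sketch only; the logistic curve and the particular placements of "ant" and "human" are my own assumptions, nothing measurable):

  // Bounded-intelligence sketch with made-up numbers, using a logistic curve.
  const L = 1.0;                                    // hypothetical ceiling
  const logistic = (t: number) => L / (1 + Math.exp(-t));
  const ant = logistic(-6);                         // ~0.0025 of the ceiling
  const human = logistic(1);                        // ~0.73, past the inflection
  const x = human / ant;                            // ~296x smarter than an ant
  const aiWouldNeed = x * human;                    // what "x times us" requires
  console.log(Math.round(x), aiWouldNeed > L);      // prints "296 true": no room above us

With the ant in the tail and us past the inflection point, "x times us" overshoots the ceiling by orders of magnitude; the only way to make room for it is to push us down into the tail too, i.e., to assume we're nearly as dumb as the ant.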

Yeah, I'll admit that this is a very naïve model but again, we're not trying to say what's right but instead just say there's good reason to believe their assumption is false. Adding more complexity to this model doesn't make their case stronger, it makes it weaker.

The second part I can make much easier to understand.

Yes, there's bad smart people, but look at the smartest people in history. Did they seek power or wish to harm? Most of the great scientists did not. A lot of them were actually quite poor and many even died fighting persecution.

So we can't conclude that greater intelligence results in greater malice. This isn't hearsay, I'm just saying Newton wasn't a homicidal maniac. I know, bold claim...

  > starting from hearsay
I don't think this word means what you think it means. Just because I didn't link sources doesn't make it a rumor. You can validate them and I gave you enough information to do so. You now have more. Ask gpt for links, I don't care, but people should stop worshiping Yud


And about this second comment, I agree that intelligence is bounded. We can discuss how much more intelligence is theoretically possible, but even if we limit ourselves to extrapolation from human variance (the agency of Musk, the mathematical ability of von Neumann, the manipulativeness of Trump, etc.), and add a little more speed and parallelism (100 times faster, 100 copies cooperating), then we can get pretty far.

Also, I agree we are all pretty fucking dumb and cannot make these kinds of predictions, which is actually one very important point in rationalist circles: doom is not certain, but p(doom) looks uncomfortably high. How lucky do you feel?


  > How lucky do you feel?
I don't gamble. But I am confident P(doom) is quite low.

Despite that, I do take AI safety quite seriously and literally work on the fundamental architectures of these things. You don't need P(doom) to be high for you to take doom seriously. The probability isn't that consequential when we consider such great costs. All that matters is the probability is not approximately zero.

But all you P(doom)-ers just make this work harder to do, and make it harder to improve those systems and make them safer. It just furthers people like Altman, who are pushing a complementary agenda and who recognize that you cannot stop the development of AI. In fact, the more you press this doom story, the more you make careful development impossible. What the story of doom (as well as the story of immense wealth) pushes is a need to rush.

If you want to really understand this, go read about nuclear deterrence. I don't mean go watch some youtube video or some Less Wrong article. I mean go grab a few books. Read both sides of the arguments. But as it stands, this is how the military ultimately thinks and that effectively makes it true. You don't launch nukes because your enemy will too. You also don't say what that red line is because then you can still use it as a bargaining chip. If you state that line, your enemy will just walk up to it and do everything before it.

So what about AI? The story being sold is that this enables a weapon of mass destruction. Take US and China. China has to make AI because the US makes AI and if the US makes AI first they can't risk that the US won't use it to take out all their nukes or ruin their economy. They can't take that risk even if the probability is low. But the same is true in reverse. So the US can't stop because China won't and if China gets there first they could destroy the US. You see the trap?[0] Now here's the fucking kicker. Suppose you believe your enemy is close to building that AI weapon. Does that cross your red line in which you will use nukes?

So you doomers are creating a self-fulfilling prophecy, in a way. Ironically, this is highly relevant to the real dangers of AI systems. The current (and still future) danger comes from outsourcing intelligence and decision making to these machines. Oddly enough, this becomes less problematic once we actually create machines with intelligence (intelligence like that of humans or animals, not automated reasoning, a technology we've had since the 60's).

You want to reduce the risk of doom? Here's what you do. You convince both sides that instead of competing, they pursue development together. Hand in hand. Openly. No one gets AI first. Secret AI programs? Considered an act of aggression. Yes, this still builds AI but it dramatically reduces the risk of danger. You don't need to rush or cut corners because you are worried about your enemy getting a weapon first and destroying you. You get the "weapon" simultaneously, along with everyone else on the planet. It's not a great solution because you still end up with "nuclear weapons" (analogously), but if everyone gets it at the same time then you end up in a situation like we have been for the last few decades (regardless of the cause, it is an abnormally peaceful time in human history) where MAD policies are in effect[1].

I don't think it'll happen, everyone will say "I would, but they won't" and end up failing without trying. But ultimately this is a better strategy than getting people to stop. You're not going to be successful in stopping. It just won't happen. P(doom) exists in this scenario even without the development of AGI. As long as that notion of doom exists, there is incentives to rush and cut corners. People like Altman will continue to push that message and say that they are the only ones who can do it safely and do it fast (which is why they love the "Scale is All You Need" story). So if you are afraid I don't think you're afraid enough. There's a lot of doom that exists before AGI. You don't need AGI or ASI for the paperclip scenario. Such an AI doesn't even require real thinking[2].

The reason doomers make work like mine harder is because researchers like me care about the nuances and subtleties. We care about understanding how the systems work. But as long as a looming threat is on the line people will argue that we have no time to study the details or find out how these things work. You cannot make these things safe without understanding how they work (to a sufficient degree at least). And frankly, it isn't just doomers, it is also people rushing to make the next AI product. It doesn't matter if ignoring those details and nuances is self-sabotaging. The main assumption under my suggestion is that when people rush they tend to make more mistakes. It's not guaranteed that people make mistakes, but there sure is a tendency for that to happen. After all, we're only human.

You ask how lucky I feel? I'll ask you how confident you are that a bunch of people racing to create something won't make mistakes. Won't make disastrous mistakes. This isn't just a game between the US and China; there are a lot more countries involved. You think all of them can race like this and not make a major mistake? A mistake that brings about the very doom in question? Me? I sure don't feel lucky about that one.

[0] It sounds silly, but this is how Project Stargate happened. No, not the current one that ironically shares the same name, the one in the 70's where they studied psychic powers. It started because a tabloid published that Russians were doing it, so the US started research in response, which caused the Russians to actually research psychic phenomena.

[1] Not to mention that if this happened it would be a unique act of unity that we've never seen in human history. And hey, if you really want to convince AI, Aliens, or whatever that we can be peaceful, here's the chance.

[2] As Melanie Mitchell likes to point out, an AGI wouldn't have this problem because if you have general intelligence you understand that humans won't sacrifice their own lives to make more paperclips. Who then would even use them? So the paperclip scenario is a danger of a sophisticated automata rather than of intelligence.


Thank you for the thoughtful response. On a first read I thought everything looked reasonably correct. However, you present the doom argument as divisive and as causing the race, when in fact it is probably the only argument for cooperation and slowing the race.


>For the "AI will be [human to ant difference] more intelligent than us" argument to follow it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).

...which is why we should be careful not to rush full-speed ahead and develop AI before we can predict how it will behave after some iterations of self-improvement. As the rationalist argument goes.

BTW you are assuming that intelligence will necessarily and inherently lead to (good) morality, and I think that's a much weirder assumption than some you're accusing rationalists of holding.


  > you are assuming that intelligence will necessarily and inherently lead to (good) morality
Please read before responding. I said no such thing. I even said there are bad smart people. I only argued that a person's goodness is orthogonal to their intelligence. But I absolutely did not make an assumption that intelligence equates to good. I said it was irrelevant...


Idk, you certainly seemed to be implying that especially in your earlier comment. I would agree that it is orthogonal, I would think most rationalists would, too.


I promise you, you misread. I think this is probably the problem sentence:

  >>>> The vast majority of smart people I know are very peaceful.
I'll also add that the vast majority of people I know are very peaceful. But neither of these means I don't know malicious people. You'd need to change "The vast majority" to "Every" for this to be the conclusion. I'm not discounting malicious smart people, I'm pointing out that it is a weird assumption to make when most people we know are kind and peaceful.

The second comment is explicit though

  >> So we can't conclude that greater intelligence results in greater malice.
This is not equivalent to "We can conclude that greater intelligence results in less malice." Those are completely different claims.


I apologize for the tone of my comment, but this is how I read your arguments (I was a little drunk at the time):

1. future AI cannot be infinitely intelligent, therefore AI is safe

But even with our level of intelligence, if we get serious we can eliminate all humans.

2. some smart ppl I know are peaceful

Do you think Putin is dumb?

3. smart ppl have different preferences than other ppl therefore AI is safe

Ironically this is the main doom argument from EY: it is difficult to make an AI that has the same values as us.

4. AI is competent enough to destroy everyone but is not able to tell fact from fiction

So are you willing to bet your life and the life of your loved ones on the certainty of these arguments?


  > I was a little drunk at the time
Honestly it still sounds like you are. You've still misread my comment and think I said there can't be bad smart people. I made no such argument, I argued that intelligence isn't related to goodness.


If that was what you meant to say though, you've gotta admit that opening a paragraph with "The other weird assumption I hear is about how it'll just kill us all", and then spending the rest of the paragraph giving examples of the peacefulness of smart people, is not the most effective strategy of communicating that.


You were the one who interpreted "Here's examples of smart peaceful people" as "smart == peaceful". I was never attempting to make such a claim, and I said as much.

  > is not the most effective strategy of communicating that.
The difficulty of talking on the internet is that you can't know your audience, and your audience is everybody. Yes, this should make us more aware of how we communicate, but it also means we need to be more aware of how we interpret. The problem was created because you made bad assumptions about what I was trying to communicate. There are multiple ways to interpret what I said, and I'm not denying that; it'd be silly to, because this is true for ANY thing you say. But the clues are there to get everything I said, and when I apologize and try to clarify, do you go back and reread what I wrote with the new understanding, or do you just pull from memory what I wrote? It probably isn't good to do the latter, because clearly it was misinterpreted the first time, right? Even if that is entirely my fault and not yours. That's why I'm telling you to reread. Because

  >>>> So we can't conclude that greater intelligence results in greater malice.
Is not equivalent to

  >>> assuming that intelligence will necessarily and inherently lead to (good) morality
We can see that this is incorrect with a silly example. Suppose someone says "All apples are red" and then someone says "but look at this apple, it is green. In fact, most apples are green." Forget the truthiness of this claim and focus on the logic. Did I claim that red apples don't exist? Did I say that only green exists? Did I forget about yellow, pink, or white ones? No! Yet this is the same logic pattern as above. You will not find the sentence "all smart people are good" (all apples are green).

Let's rewrite your comment with apples

  > If that was what you meant to say though, you've gotta admit that opening a paragraph with "The other weird assumption I hear is about how all apples are red", and then spending the rest of the paragraph giving examples of different types of green apples, is not the most effective strategy of communicating that.
 
Do you agree with your conclusion now? We only changed the subject; the logic is intact. So, how about them apples?

And forgive my tone, but both you and empiricus are double commenting and so I'm repeating myself. You're also saying very similar things, we don't need to fracture a conversation and repeat. We can just talk human to human.


I think the big difference between our views is that you are taking the rationalist argument to be "from intelligence follows malice, therefore it will want to kill us all" whereas I take it to be "from intelligence follows great capability and no morality, therefore it may or may not kill us uncaringly in pursuit of other goals".


  > you are taking the rationalist argument to be
I think they say P(doom) is a high number[0]. Or in other words, AGI is likely to kill us. I interpret this as "if we make a really intelligent machine, it is very likely to kill us all." My interpretation is mainly based on them saying "if we build a really intelligent machine, it is very likely to kill us all."

Yud literally wrote a book titled "If Anyone Builds It, Everyone Dies."[1] There's not much room for ambiguity here...

[0] Yud is on the record saying at least 95% https://pauseai.info/pdoom He also said anyone with a higher P(doom) than him is crazy so I think that says a lot...

[1] https://ifanyonebuildsit.com/


Yes, I agree they are saying it is likely going to kill us all. My interpretation is consistent with that, and so is yours. The difference is in why/how it will kill us; you sound to me like you think the rationalist position is that from intelligence follows malice, and therefore it will kill us. I think that's a wrong interpretation of their views.


Well then, instead of just telling me I'm wrong why don't you tell me why I'm wrong.


It's surprisingly funny for AI, but there's just so much of it... It has no sense of pacing. It repeats the same jokes for too long, without including bits of normalcy in between as a breather. Still, it's a lot better than I would have expected from something written 100% by AI, and I'm very curious what the prompt involved.


For everyone who's having a hard time parsing what Octelium does, I found this page to be the clearest explanation: https://octelium.com/docs/octelium/latest/overview/how-octel...

It's clearer because, instead of starting with a massive list of everything you could do with Octelium (which is indeed confusing), it starts by explaining the core primitives Octelium is built on, and builds up from there.

And it actually looks pretty cool and useful! From what I can tell, the core functionality is:

- A VPN-like gateway that understands higher-level protocols, like HTTP or PostgreSQL, and can make fine-grained security decisions using the content of those protocols

- A cluster configuration layer on top of Kubernetes

And these two things combine to make, basically, a personal cloud. So, like any of the big cloud platforms, it does a million things and it's hard to figure out which ones you need at first. But it seems like the kind of system that could be used for a homelab, a small company that wants to keep cloud costs down, or a custom PaaS selling cloud functionality. Neat!
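
That first primitive, a gateway that parses the application protocol and makes per-request decisions, is the part that's easiest to misread as "just another VPN." Here's a rough sketch of the general idea in plain TypeScript (my own made-up types and rules, not Octelium's actual API or policy format):

  // Illustrative only: an L7-aware gateway decides per request or per query,
  // not per connection. The types and rules below are invented for the sketch.
  type HttpRequest = { user: string; method: string; path: string };
  type PgQuery = { user: string; sql: string };

  function allowHttp(req: HttpRequest): boolean {
    // e.g. contractors may read the API but never mutate it
    if (req.user.startsWith("contractor-") && req.method !== "GET") return false;
    return req.path.startsWith("/api/");
  }

  function allowPg(q: PgQuery): boolean {
    // e.g. analysts get SELECT-only access to the shared database
    const readOnly = /^\s*select\b/i.test(q.sql);
    return q.user.endsWith("@analytics") ? readOnly : true;
  }

The point is just that the allow/deny decision can see HTTP methods and SQL verbs, which a plain L3/L4 tunnel can't.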


TailScale is wonderful but they do need competition. I imagine an IPO is on the horizon, and as soon as they enter that phase, nasty price increases are sure to follow unless someone else is nipping hard at their heels.


Hopefully their tolerance to self-hosters (Headscale) doesn't change.


The individual working on Headscale also works for Tailscale. And since it's quite stable and production-ready, even if they pull the plug, a community fork would keep it alive, given that the majority of essential features are already there.


The problem is, commercial services will always enshittify. It's inevitable. Even when they conquer the whole market (see Netflix) they will want to see a rising line in profits so then they will turn the thumbscrews on the customers.


It’s especially when they conquer the whole market. It’s why investors favor growth and adoption, even at a loss, until it’s won the market and can turn up the monetization dial.


Well, they do it anyway.

All the streaming services are enshittifying, even the smaller ones. And other smaller webshops are enshittifying the same way that Amazon does. As Cory Doctorow described, there are a few big webshops in the Netherlands, like bol.com and coolblue.com, and they are now also allowing third-party sellers, often even from China. The webshops are absolved of all responsibility, but they do cash out on every transaction.


The term 'enshittification' sounds negative for what an organization needs to do to take care of employees.


Sorry, no. A stable organization with a good profit margin is enough to take care of employees and pay their salaries. Boundless growth, which is what enshittification is associated with, is driven by money-hungry stakeholders and "investors" that demand an ever-growing return on investment - they don't settle for speed, they need constant acceleration.


Isn’t it more of an “all of the above”?

A lot of employees at successful startups & FAANG make most of their money from the stock, no? And they need to buy houses and send their kids to fancy schools too, no? So sure, we can reduce it to stock holders, but I’d bet dollars to donuts the 90% of employees who aren’t posting on hn are at least passively ok with “improving metrics”, and some ambitious ones are driving the enshittification initiatives hard.


IMO the reason devs started being paid in stock in the first place is the VC-style grow-at-all-costs mentality. The fundraising economy didn't work without fabricating compensation and only paying out on hits.

No other industry operates with such a blurred distinction between employees and owners. Well, save for the gig economy, itself a tumor on American-style big tech.


It's the American mentality. More, more, more.

Personally I'd be much happier with a stable income with not much upward mobility but also not much risk of falling downwards. Which is what Europe is geared more towards. I don't constantly want to be in a race. Just to live my life.

If the employees want it, fine, but don't be surprised if we customers start finding alternatives. And/or pirating their content (e.g. when it comes to streaming services).

But yeah, American companies aren't there to support the employees. The only ones they answer to are the owners or large shareholders (whichever applies), and their only goal is to make those richer. Customers and employees alike are nothing but consumables, a raw resource you only treat right if you can't avoid it.


They seem to be fine with it: "You could alternatively host your own trusted control server with Headscale."[1]

[1] https://tailscale.com/blog/tailnet-lock-ga#self-hosting


They're fine with it now. They won't be, when the next potential revenue source on their list is to crack down on it.

Remember, revenue must always increase, and must always increase faster than the year before (and this is more important than keeping the company alive), so companies always try increasingly desperate measures. Right now they are nowhere near the point of that particular measure. But they will be in the future.



But there are so so many competing products already?

Not all are commercial (but why would you want that anyway). But ZeroTier is another one like that. Basically the same thing.


Yeah, I chose ZeroTier over Tailscale early on. Zero regrets, it’s nearly perfect for my use-case (remote monitoring and management of highly diverse systems and environments).


There is also the Chinese EasyTier: https://easytier.cn/en/


See Nebula by slack


Netbird is nipping at their heels


I’ve been meaning to explore Netbird. Fewer features at the moment, but can be fully self hosted.


Their mobile android app is awful.


We have just published our android app rework for testing. Mind trying it out? Appreciate the feedback

https://www.reddit.com/r/netbird/s/lRjyehCQFi


I mean, the fact that Headscale exists and is still in decent development means I doubt it really is an issue. What I'd like to see is an effort toward an open-source Tailscale client, so we could use Headscale without the closed-source client.


Isn't the client entirely OSS? - https://github.com/tailscale/tailscale


IIRC it’s just the macOS GUI client that’s closed source? I use the CLI client (CLIent?) compiled from source.

EDIT: yep, referenced in your link! They have a very clear page[0] describing what is and isn’t open source.

[0] https://tailscale.com/opensource


Programmable network tunnel fabric.


I really like Mithril.js (https://mithril.js.org/), which is, IMO, as close as it gets to web IMGUI. It looks a lot like React, but rendering happens manually, either on each event or with a manual m.redraw() call.
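
A minimal sketch of what that looks like (the counter component is my own example; it assumes the "mithril" package is installed):

  import m from "mithril";

  let count = 0;
  const Counter = {
    // the view function is re-run on every redraw, a bit like an IMGUI frame
    view: () => m("button", { onclick: () => count++ }, `Clicks: ${count}`),
  };
  m.mount(document.body, Counter);

  // Redraws happen automatically after event handlers attached through Mithril;
  // for state changed outside of them, you trigger one manually:
  setInterval(() => { count++; m.redraw(); }, 1000);

No setState or hooks: you mutate plain variables and tell Mithril when to render, which is what gives it that IMGUI-ish feel.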


I think, similar to Preact, Mithril skips the VDOM, which makes it "more immediate" than React.

However, updating the DOM and then turning the DOM into an image (i.e., rendering it) still has an indirection that using canvas/webgl/etc. doesn't have.


> I think, similar to Preact, Mithril skips the VDOM, which makes it "more immediate" than React.

Both Mithril and Preact use virtual DOMs:

https://mithril.js.org/vnodes.html

https://preactjs.com/tutorial/01-vdom


I find it interesting that this kind of "animal intelligence" is still so far away, while LLMs have become so good at "human intelligence" (language) that they can reliably pass the Turing Test.

I think that the LLMs we have today aren't so much artificial brains as they are artificial brain organs, like the speech center or vision center of a brain. We'd get closer to AGI if we could incorporate them with the rest of a brain, but we still have no idea how to even begin building, say, a motor cortex.


You're absolutely right, and reflecting on it shows why the article is horribly wrong. Humans are multimodal: they're ensemble models where many functions are highly localized to specific parts of the hardware. Biologically these faculties are "emergent" only in the sense that (a) they evolved through natural selection and (b) they need to be grown and trained in each human to work properly. They're not at all higher-level phenomena emulated within general-purpose neural circuitry. Even Nature thinks that would be absurdly inefficient!

But accelerationists, like Yudkowskites, are always heavily predisposed to believe in exceptionalism—whether it's of their own brains or someone else's—so it's impossible to stop them from making unhinged generalizations. An expert in Pascal's Mugging[1] could make a fortune by preying on their blind spots.

[1]https://en.wikipedia.org/wiki/Pascal's_mugging


The brain is not a statistical inference machine. In fact, humans are terrible at inference. Humans are great at pattern matching and extrapolation (to the extent that it produces a number of very noticeable biases). Language and vision are no different.

One of the known biases of the human mind is finding patterns even when there are none. We also compare objects or abstract concepts with each other even when the two objects (or concepts) have nothing in common. We usually compare our brain to our most advanced consumer technology: previously this was the telephone, then the digital computer; when I studied psychology we compared our brain to the internet, and now we compare it to large language models. At some future date the comparison to LLMs will sound as silly as the older comparison to telephones does to us.

I actually don't believe AGI is possible. We see human intelligence as unique, and if we create anything which approaches it we will simply redefine human intelligence to still be unique. But also, I think the quest for AGI is ultimately pointless. We have human brains, we have 8.2 billion of them, so why create an artificial version of something we already have? Telephones, digital computers, the internet, and LLMs are useful for things that the brain is not very good at (well, maybe not LLMs; that remains to be seen). Millions of brains can only compute pi to a fraction of the decimal places that a single computer can.


> We have human brains, we have 8.2 billion of them, so why create an artificial version of something we already have?

To circumvent anti-slavery laws.


People are calling LLMs plagiarism machines, so I guess AGI will be called scab machines.


>why create an artificial version of a something we already have

Why build a factory to produce goods more cheaply? Because the rich get richer and become less reliant on the whims of labor. AI is industrialization of knowledge work.


> while LLMs have become so good at "human intelligence" (language) that they can reliably pass the Turing Test

If the LLM overhype has taught me anything, it's that the Turing Test is much easier to pass than expected. If you pick the right set of people, anyway.

Turns out a whole lot of people will gladly Clever Hans themselves.

"LLMs are intelligent" / "AGI is coming" is frankly the tech equivalent of chemtrails and jet fuel/steel beams.


This is a great analogy, I totally agree!


Obligatory shameless plug whenever Zod is posted: if you want similar, but much more minimal, schema validation at runtime, with a JSON representation, try Spartan Schema: https://github.com/ar-nelson/spartan-schema


When I see a repository with many files "updated 4 years ago" I'm usually inclined to think it's abandoned.


Or the code is done and doesn’t need to be iterated on continuously?


Then say so in the readme :)


Hard to please everyone


A thought I had while reading this: what about putting a flexible membrane above the wheels (or belt) with the dots? This would require the user to press down to feel the dots, but it would remove the issue of fingers or hair getting caught in the wheels.


There is at least going to be a Paper Mario: The Thousand Year Door remake coming out sometime in 2024. I'm hoping it's a sign of more traditional Mario RPGs to come, but who knows. The TTYD remake is something that most of the fanbase thought would never happen, so I guess anything is possible.


I've never been a big fan of Paper Mario. I personally loved the isometric approach Mario RPG took; it really fit the Mario motif better and felt more like a platformer RPG.


Disagree. Mastodon isn't so much a Twitter alternative as a Tumblr alternative with Twitter-like UX. And its value proposition is much the same as Tumblr: curate a collection of interesting discussions and memes from across the network, and in the process find people who like the same kinds of things and build connections with them.

It's not about "number go up" in the same way Twitter or Instagram is because there's no algorithm to give it a feedback loop.

Both Twitter/Instagram and Tumblr/Mastodon are about getting community/attention from strangers by posting what you want and letting them come to you (as opposed to Reddit or forums where you join existing discussions). But the Twitter/Instagram model relies on an algorithm so it only benefits those who are already famous or who invest in gaming the system, whereas Tumblr/Mastodon make it easy for everyone to find their community of a few dozen mutuals without being buried by the algorithm.


It's all ether screaming mate. You scream into a black hole and hope someone (or something) responds. The only reason people continue to do it is because numbers go up. If numbers don't go up, people leave.

I'm not knocking it, I do it too. We are all desperate for connection and have a desire to be heard. We don't even care if it's real or not, we just don't want to feel alone. We are so desperate for connection, in fact, that we will create work that others profit from just so we don't feel alone.

It's why we are talking to each other right now. We could be doing anything else right now, but here we are. Desperate to be heard. Clicking refresh. Wanting to feel validated. Even if one of us is a bot, does it matter? The feeling is real.


I dunno. I've met most of my friends, including a lot of people in my local area, via the Fediverse, long before these recent waves of migration. Numbers are a lot less important than real connection.


Thanks for posting this -- as someone who was never on Tumblr, nor used Mastodon, this still felt like an interesting explanatory facet that helps me understand it better, both in terms of mindset and how people use it.


I've had the opposite experience. On Mastodon I get a small, but consistent, amount of engagement with my posts, and I have over 400 followers. Twitter was like shouting into the void.

