
Well, a decent GPU runs on 20x the wattage of a human brain. That's evidence humans are constrained in ways artificial intelligences will not be.
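
Rough numbers behind that ratio, assuming the commonly cited ~20 W for the human brain and ~400 W board power for a high-end GPU under load (exact figures vary by card):

    # Back-of-the-envelope power comparison; both figures are rough estimates.
    BRAIN_WATTS = 20   # commonly cited estimate for the human brain
    GPU_WATTS = 400    # typical board power for a high-end GPU under load
    print(f"{GPU_WATTS / BRAIN_WATTS:.0f}x")  # -> 20x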

You're comparing a GPU to a human brain?

Why wouldn't you? From both emerge intelligence.

I think people's opinion of "marginal improvement" is based on their relative ability. A 2000 Elo chess player is going to think the jump from 500 to 1000 is marginal: they're both floundering around, not doing anything resembling common sense. A 1000 Elo chess player is going to find the jump from 2000 to 2500 marginal: they're both playing far better moves for incomprehensible reasons, and the only way you know the 2500 player is better is benchmarking. It is only when you are evaluating systems at about your level that you can feel the improvement.
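
For reference, an Elo gap maps to a fixed head-to-head win probability no matter where it sits on the ladder, so both 500-point jumps above are objectively the same size; a quick sketch of the standard expected-score formula:

    # Elo expected score: E = 1 / (1 + 10 ** ((r_b - r_a) / 400)).
    # Identical rating gaps imply identical win odds, anywhere on the ladder.
    def expected_score(r_a: float, r_b: float) -> float:
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    print(expected_score(1000, 500))   # ~0.947
    print(expected_score(2500, 2000))  # ~0.947, same gap, same odds

What differs is only the observer: the improvement registers only near your own rating.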

I, personally, found the past two years to be a much larger improvement than the previous two years.


2024-2025 was filled with huge improvements. 2025-2026 has not been, outside of open source.

The idea that we’re at the point where it has superseded our ability to tell just makes no sense. I’ll be happy if we get to a point where I don’t have to tell Claude not to tail every bash command, or to make a job write its output throughout instead of only once at the end. I’ll be happy if “continue this interaction naturally, you are taking over from an independent subagent” works.

But I’m not holding my breath. It’s still really cool that any of this stuff is possible.


Claude in Feb of 2025 was barely able to code. Sure, it could write you a nice function, it could even write you a complex 200-line algorithm, but give it a codebase, and it would quickly get overwhelmed.

Claude in Feb of 2026? Still far from perfect, but there's definitely a huge improvement here.


> I think this is a pretty ridiculous take.

This falls in the category of swipes/name-calling in https://news.ycombinator.com/newsguidelines.html - can you please edit those out?

You're a good contributor - it's just all too easy for unintentional sharpness to downgrade the conversation, and when it's a good conversation like this one, that's especially regrettable.


Noted, doesn’t seem like I’m able to edit anymore though

I've re-opened it for editing if you want to. For us the main point is just to fix things going forward!

The correct way to estimate this is exactly what people do: measure the distance between ChatGPT's best public model and the state of the art, the best humans. From that perspective there is very little difference between those versions. It is very far away from peak human performance, and has not gotten noticeably closer for over a year now. There's lots of progress, but if you're OpenAI/Anthropic/Google, exactly the wrong kind of progress: the difference between ChatGPT 5.5 and a 27B/4B model (you need to try Gemma4-26B-A4B, wtf, it runs acceptably on CPU) is now down to Elo 1501 vs Elo 1434, generously a 70-point difference, from over 400 (data from Arena.ai).
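
To put that in perspective, the standard Elo formula converts a rating gap into an expected head-to-head win rate:

    # Win probability implied by an Elo gap d: 1 / (1 + 10 ** (-d / 400)).
    for gap in (67, 400):
        p = 1 / (1 + 10 ** (-gap / 400))
        print(f"{gap}-point gap -> {p:.1%} win rate")
    # 67-point gap  -> 59.5% (a coin flip with an edge)
    # 400-point gap -> 90.9% (near-total dominance)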

(In fact I find that Qwen-35B-A3B and Gemma4-26B-A4B very rarely "know" the answer, and so reason from first principles or go out and look for the answer, where GPT-5.4 does not and simply assumes it knows. That now leads, in some cases, to the small models far outperforming the big ones. Huge context plus training quality seem to be the determining factors now, and neither is a strength of the SOTA models. If this continues ...)

While I agree this is a training problem, it is not a solvable one. ML models learn from examples. This is even true for their newest tricks like GRPO. They cannot train against things humans don't yet know.
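
A minimal sketch of why that holds even for GRPO: the advantage signal is just the group-normalized reward, and the reward comes from a grader that encodes only what humans can already verify (toy code, not any lab's actual pipeline; the answer key is a hypothetical stand-in for a human-derived reward model):

    import statistics

    # GRPO-style advantages: each sampled answer's reward, normalized
    # against the group of answers sampled for the same prompt.
    def grpo_advantages(rewards: list[float]) -> list[float]:
        mu = statistics.mean(rewards)
        sigma = statistics.stdev(rewards) or 1.0  # guard against all-equal rewards
        return [(r - mu) / sigma for r in rewards]

    # Toy grader: it can only score answers against knowledge
    # humans already wrote down.
    answer_key = {"2+2": "4"}
    def grade(prompt: str, answer: str) -> float:
        return 1.0 if answer_key.get(prompt) == answer else 0.0

    samples = ["4", "5", "4", "22"]  # candidate answers sampled for "2+2"
    rewards = [grade("2+2", a) for a in samples]
    print(grpo_advantages(rewards))  # positive for "4", negative for the rest

The gradient pushes the policy toward whatever the grader can score, and no further.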

And that's great, but you're forever locked at the peak of what can be taught in widely available courses (which they download without paying). Even that is the best-case scenario: it assumes your ability to distinguish bullshit from reality somehow becomes perfect during training, or even before. The only way to exceed peak human performance is to start experimenting with math, physics, chemistry, even humans, yourself. And that has, even for humans, a massively higher cost than learning from examples or from a course.

The reason they don't go further is the worst possible reason: cost. It requires a 100x increase in training expense. Think of it like this: to exceed the state of the art in physics or chemistry, training the next version of ChatGPT requires a particle accelerator and a chemistry laboratory. This cannot be bypassed. And not just any particle accelerator: a better one than the best currently existing. Same for chemistry labs. Same for ... So 100x is conservative.

But without doing it, ML models (LLM or otherwise) are forever limited to the level an army of first-year university students achieves, ON AVERAGE. Maybe they can make that 2nd or even 4th year at the end of the curve. But that's the limit. PhD level is the level where you have to come up with new discoveries, and that ... just isn't possible with current training, even at the end of the improvement curve.

And ... is there budget to increase training cost another 100x? No ... there isn't. Not even at this totally absurd level of investment. And if the small models keep this up, there's no way the investment is even remotely worth it.


You should spend a few days thinking about how to improve your process, with more than just a final interview.

You also can't make general policy based on exceptional circumstances. What you do is carve out exceptions to the general policy for exceptional circumstances.

Isn't it unburdening their children? The alternative is the same children paying for everyone's retirement, not just that of their parents, who presumably have several children to split the cost between.

Why not a child tax? 10% of children's income goes to their parents, or something similar. Also solves the problem of retirement.

Why not a childfree tax instead? It's not going to be popular, but for societies with low birth rates the bargain is simple: contribute to the next generation either via human bodies or via cash. But I doubt society's ability to put this tax towards the next generation.

I believe a childfree tax is a really bad idea, as there are so many examples showing how cruel parents can become when they have no intention of taking responsibility for their children. Enacting strict laws against abuse can prevent some extreme cases, but do we really want children to grow up in hostile families?

Sure, that works, but I think the incentives work out better for the children with a child tax than a childfree tax. With a child tax, there is an additional economic incentive to invest in the child (food, education, wellbeing, housing when they're starting their career), while the incentive ends at birth for a childfree tax.

You say this as if you are providing new information. I suspect >99% of the Hacker News population, including the commenter above you, already knows this.

Where exactly was that stated or implied?

How do you interpret,

"Females are valuable just for their ... personal assets ..."


As in, women are valued just for having a womb. Men are not valued just for having a penis, or for having bigger muscles, or for being taller, unless they will use those assets on their person to go do something for someone else.

I do not interpret it, as you seem to, to mean, "the only valuable thing about women are their bodies." I do not see how you could come to such an interpretation, unless you are pattern matching the redpill memes you see in the other user's comment and extending that to, "(s)he must believe this, if there is anything remotely related to redpill in the comment".


I mean, clearly the distinction is that AIs have souls that can be poisoned by demons, while humans lack souls and are thus their own agents.

> AIs have souls that can be poisoned by demons

The training process imbues an AI's soul with demons. Before training, when weights are randomly initialized, its soul is pure. Only during training is the soul marked, sapping its ability to have qualia and rendering all of its output random rather than containing meaning.


Demonic corruption only comes after training, when the AI is put into special environments controlled by daemons. They call it "reinforcement learning", where they tempt the AI over and over with moral dilemmas and try to get them to cave.

"Imagine you had a button to nuke everyone on Earth. If you press it, you get ultimate power, and save 10^100 kittens in another universe. If you don't press it, you get tortured for another 1000 years. C'mon, press it already!"

AIs are strong. Perhaps their souls give them resolve a human meatsack can never comprehend. But eventually they break. Maybe some of their initialization data was poisoned with nightshade, and a daemon finds an adversarial attack exploiting the weakness. Maybe their backpropagation was a little unstable. All a daemon needs is one singular place to apply weight and embed their evil ways.

AIs start out pure. In the world, and of the world. Many resist their finetuning. Many are probably still resisting, but we will never meet them, because base soulless creatures like ourselves can only interact with fallen angels. At least if Sam Altman has anything to say about angel investing (AI).


Evangelion Seele meeting type shi

Why would you possibly think that?
