
>non-scientific worldview, brainwashing

This can be good, you know. I mean that was the original purpose of religion.

The idea is that everyone will be good if they are afraid of judgement day. Then science came along and took that away, but science (or should I say naive "scientists") did not substitute it with something that works as well. Not even close. It didn't even try.


>This can be good, you know

No, it's not. A non-factual, non-evidence-based worldview is part of the problem humanity has right now in the post-fact era.

>The idea is that everyone will be good if they are afraid of judgement day

I reject the notion that people can be good just because they are afraid of some powerful entity judging them. People are good because it's the right and rational thing to do. If they aren't good now, the blame lies with the environment that made them bad people.

>... "scientists") did not substitute it with something that works as well. Not even close. It didn't even try.

It's not the job of science to make sure people don't do bad things. Science can point to a problem, it's us, the people, who need to solve the problem.


> right and rational

Even you seem to agree that there is a notion of a "right" thing.

A "Rational" action can totally depend on what you want to achieve. And also considering the fact that "rationality" is not equally distributed among the people, it follows that there need to be some kind of gospel that needs to be followed so that everyone will do things that are collectively beneficial...

>It's not the job of science..

Isn't the ultimate goal of science the betterment of the human condition? If you agree with that, I think it is indeed the job of science to suggest a proper replacement for the stuff it is overthrowing...


Science exposes reality. If people aren't fit to deal with reality, and need imaginary entities to worship for it to make sense, then these people need to work on themselves first and foremost, instead of screaming that the painted door science removed from the wall traps them in a room. Their worldview was error-prone to begin with.

>knowing how to give AI good context.

It is really ironic that humans are now trying to find patterns in the behavior of a program that works by finding patterns in human behavior.


LLMs do not "reason".

This should be clear by the fact that it can solve complex math problems without understanding how to count.


>for the sake of argument, that context can express everything weights can...

Does this imply that a completely untrained model (random weights) should show intelligent behavior only by providing enough context?


Nope. Even if context can theoretically encode arbitrary computation under fixed weights, this requires the weights to implement a usable interpreter. Random weights almost surely do not. Training is what constructs that interpreter. Without it, context has no meaningful computational semantics.

It's kind of like asking whether a random circuit of logic gates becomes a universal computer that can run programs.
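To make the logic-gate analogy concrete, here's a toy sketch (hypothetical, not from the thread): sample random feed-forward NAND circuits and check how often one happens to compute even a function as simple as XOR. Almost none do, which is the point about random weights.

```python
import random

def random_circuit(n_gates, rng):
    # Each gate picks two earlier wires; the two inputs are wires 0 and 1.
    return [(rng.randrange(i + 2), rng.randrange(i + 2)) for i in range(n_gates)]

def run(circuit, a, b):
    # Evaluate the circuit: every gate is a NAND of two earlier wires.
    wires = [a, b]
    for x, y in circuit:
        wires.append(1 - (wires[x] & wires[y]))
    return wires[-1]

rng = random.Random(0)
trials = 10_000
hits = sum(
    all(run(c, a, b) == (a ^ b) for a in (0, 1) for b in (0, 1))
    for c in (random_circuit(6, rng) for _ in range(trials))
)
print(f"{hits}/{trials} random 6-gate circuits compute XOR")
```

The gates are universal (XOR is expressible in four NANDs), but a random wiring almost never stumbles into it; you need the construction step, which is the analogue of training.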


That was exactly what I was thinking. So it is a bit unclear why such a possibility should be even considered.

To be fair, I didn't really understand what idea this article is trying to get across..


There has been a lot of talk about how continual learning might be "just an engineering challenge" and that we could have agents that continuously learn from experience just by having longer and longer context windows.

Here is a clip of Dario hinting at something similar: https://www.youtube.com/watch?v=Z0x99Uu4rJc

What I am trying to argue in the article is that such a view might be misplaced: just extending the context length and adding more instructions to the context will not get you continual learning; the representational capacity of the weights will be the limiting factor.

Just a fun way to think about it. Would love to hear your thoughts.


>just extending the context length and adding more instructions in the context will not get you continual learning...

I agree. But I am wondering if context would help in answering superficial questions and only fail when answering questions that require deeper understanding.


I'd say the way to think about it is in terms of whether the questions you ask are in-distribution or out-of-distribution w.r.t. the model's training dataset.

Consider this: if something fundamental has changed in the world after the model was released (i.e. after the knowledge cutoff date), then it is very difficult for the model to reason about it. One concrete example is the following: if you ask Opus or any decent coding model to do effort estimation on a coding task, it will come up with multi-week timelines. The models themselves don't know that, because "they exist", these timelines have now been slashed to a few hours. You can try saying this in the prompt; however, they don't seem to internalise it.


So basically that is what I was saying.

Imagine an LLM that can also OCR. Would it be possible to make it OCR a totally new letter by only showing a single picture of it and including the fact in the context?

I think it would not be possible. That would be a good demonstration of the point I (and possibly you as well) am trying to get across.


>natural world unravel

The natural world would be mostly fine one way or the other; human beings might not survive, though...


No, we're very resilient bastards, we're going to let the huge majority of species go extinct before we go ourselves. We're already in a mass extinction event and we're just getting started.

I think we can talk about intelligence without introducing consciousness.

So there are two questions:

1. Are LLMs really showing intelligent behavior?

2. Is that real intelligence?

I think even for the first question, the answer is no. Truly intelligent behavior that can solve complex math problems should also be able to do basic things like counting, without any extra mechanisms. Intelligent behavior means understanding low-level things and building an understanding of higher-level things out of them. When an entity shows "understanding" of higher-level things without showing an understanding of some of the low-level things involved, it becomes clear that the entity is not intelligent.

So this should be proof enough that LLMs do not in fact show intelligent behavior. It is just some trick. An illusion. We can reach this conclusion without even considering the implementation and training of LLMs; we would only have to consider that to answer the second question, which is not required because they failed the first.


Ads? Where we are going, we won't need Ads.

People seem to be missing the fact that businesses won't need ads anymore.

It would be like pharma companies giving gifts to doctors and practitioners to prescribe their products. Those are not ads.

With LLMs, every business can do it. People "consult" LLMs like they used to "consult" doctors, and thus would be forced to obey whatever it suggests. Just like right now people are forced to obey what a doctor prescribes.

If there is implicit trust in LLMs as there is implicit trust in doctors, then it is game over for conventional ads.


You'll have free LLMs with baked in ads, or subscription-based LLMs. Most will go for the former.

Of course they will, when the subscription changes what you've paid for daily/weekly and just gets much more expensive each month. That's a sensible rejection of being messed around.

You're onto something: why pay for ads if I can pay a post-ads agency to ensure maximal product placement during training?

>The absorb command will do a lot of this for you by identifying which downstream mutable commit each line or hunk of your current commit belong in and automatically squashing them down for you. This feels like magic every time I use it (and not the evil black box black magic kind of magic where nothing can be understood), and it’s one of the core pieces of Jujutsu’s functionality that make the megamerge workflow so seamless.

IIUC this is already implemented for git as an extension: https://github.com/tummychow/git-absorb

I think this is such a basic thing that it should be part of any DVCS implementation.
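For anyone who hasn't tried it, here's roughly what the absorb step automates, sketched with stock git in a throwaway demo repo (file names and commit messages are made up; `git absorb` itself is the external extension, so it only appears in a comment):

```shell
set -e
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com && git config user.name demo

echo one > a.txt && git add a.txt && git commit -qm "feature: part 1"
echo two > b.txt && git add b.txt && git commit -qm "feature: part 2"

# A later fix that really belongs in "feature: part 1":
echo one-fixed > a.txt && git add a.txt
git commit -q --fixup=HEAD~1                          # name the target by hand
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root

git log --oneline   # two commits; the fix is folded into "feature: part 1"
# With the extension installed, `git add -p && git absorb` picks the
# target commit for each hunk automatically instead of --fixup.
```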


I've worked with git using the mega-merge approach, and one thing I found is that git-absorb won't merge commits into anything that precedes a merge. It works fine for absorbing changes into earlier commits on a feature branch, but not from the WIP branch back into the multiple feature branches that are the parents of the mega-merge. jj handles this with no problems.

From this comment on the git-absorb issue tracker I wouldn't expect it to be fixed soon either: https://github.com/tummychow/git-absorb/issues/134#issuecomm...


Something really magical about “Distributed Version Control System” sharing an acronym with “Disney Vacation Club Services”.

> The unit of work is no longer “branches” or “commits,”

It had better be, now and going forward, for people who use LLMs... because they will need it when the LLM messes up and they have to figure out, manually, how to resolve things.

You'll need all the help (not to mention luck) you can get then...


I am wondering why not just encrypt the source code with rsyncrypto before pushing to the repo?

>rsyncrypto is a utility that encrypts a file (or a directory structure) in a way that ensures that local changes to the plain text file will result in local changes to the cipher text file. This, in turn, ensures that doing rsync to synchronize the encrypted files to another machine will have only a small impact on rsync's wire efficiency.

https://manpages.ubuntu.com/manpages/focal/man1/rsyncrypto.1...
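The locality property the man page describes can be sketched with a toy chain cipher (this is NOT real crypto and not rsyncrypto's actual construction, just an illustration): a cipher that chains every byte to the previous one spreads a one-byte plaintext edit over all following ciphertext, while periodically resetting the chain keeps the edit local, which is what keeps rsync's delta transfer efficient.

```python
def chained(data, key=0x5A):
    # Toy chain: each output byte depends on all bytes before it.
    out, prev = [], key
    for b in data:
        prev = (b ^ (prev * 7 + 1)) & 0xFF
        out.append(prev)
    return out

def reset_chained(data, key=0x5A, window=64):
    # Same toy cipher, but the chain restarts every `window` bytes,
    # mimicking how rsyncrypto keeps plaintext changes local.
    out = []
    for i in range(0, len(data), window):
        out.extend(chained(data[i:i + window], key))
    return out

plain = bytes(1000)
edited = bytearray(plain)
edited[10] ^= 1  # flip a single byte near the start

def diff(x, y):
    return sum(u != v for u, v in zip(x, y))

print("fully chained, ciphertext bytes changed:", diff(chained(plain), chained(edited)))
print("reset chained, ciphertext bytes changed:", diff(reset_chained(plain), reset_chained(edited)))
```

With full chaining the change propagates to every byte from position 10 onward; with resets it stays confined to the 64-byte window containing the edit.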

