If you have racing thoughts and some magic system responds to you, and it's abstract enough (plenty of people, even on HN, do not know how LLMs work), then going for a walk is not enough...
You clearly underestimate the quality of people I have seen and worked with.
And yes guard rails can be added easily.
Security is my only concern, and for that we have a team doing only this, but that's also just a question of time.
Whatever LLMs can do today doesn't matter. What matters is how fast they progress, and we will see whether in 5 years we still use LLMs, or AGI, or some kind of world models.
> You clearly underestimate the quality of people I have seen and worked with.
I'm not sure what you're referring to. I didn't say anything about capabilities of people. If anything, I defend people :-)
> And yes guard rails can be added easily.
Do you mean models can be prevented from doing dumb things? I'm not too sure about that, unless humans engineer a strict software architecture in which LLMs simply write code and implement features. Not everything is web development, where we can simply lock filesystems and block prod database changes. Software is very complex across the industry.
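As one concrete form of such a human-engineered guard rail (a hypothetical sketch — the sandbox path and function names are mine, not from any real agent framework): wrap the file-write tool an LLM agent is given so it physically cannot touch anything outside an allowed directory.

```python
from pathlib import Path

ALLOWED_ROOT = Path("agent_sandbox").resolve()  # hypothetical sandbox directory

def safe_write(relative_path: str, content: str) -> Path:
    """A file-write tool for an agent that refuses to escape the sandbox."""
    target = (ALLOWED_ROOT / relative_path).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):  # blocks ../ traversal and absolute paths
        raise PermissionError(f"write outside sandbox refused: {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
    return target
```

Even this is only a sketch: real containment also needs process-level sandboxing, since nothing stops the model's code from calling `open()` directly.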
I know plenty of people who are worse at writing code than Claude. People with real jobs who are expensive, like 50-100k/year.
People whom you always have to handhold and for whom code review is essential.
You can write tests, PR gates, etc.
It's still a sliding scale between what you can let them do unsupervised and what you have to control more closely, but they're already better than real people I know. And they're also a lot faster.
> You clearly underestimate the quality of people I have seen and worked with
"Humans aren't perfect"
This argument always comes up. The existence of stupid / careless / illiterate people in the workplace doesn't excuse spending trillions on computer systems that use more energy than entire countries and are still unreliable.
It shows that an LLM can work on issues like this today, and tomorrow it will be able to do even more.
Don't be so ignorant. A few years ago NO ONE could have come up with something as generic as an LLM, which helps you solve these kinds of problems and also creates text adventures and Java code.
Debatable, I would argue. It's definitely not 'just a statistical model', and I would argue that the compression into this space handles potential issues differently than plain statistics would.
But I'm not a mathematics expert; if that's the real official definition, I'm fine with it. But are you, though?
It's a statistical term: a latent variable is one that is either known to exist or believed to exist, and is then estimated.
Consider estimating the position of an object from noisy readings. One presumes that the position exists in some sense, and then one can estimate it by combining multiple measurements, increasing the positioning resolution.
It's any variable that is postulated or known to exist, and for which you run some fitting procedure.
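A quick sketch of that idea (the numbers are toy values of my choosing): the true position is the latent variable. We never observe it directly, only noisy readings, and even the simplest fitting procedure — averaging — recovers it with better resolution than any single reading.

```python
import numpy as np

rng = np.random.default_rng(42)
true_position = 3.7      # the latent variable: assumed to exist, never observed directly
noise_std = 1.0
readings = true_position + rng.normal(0.0, noise_std, size=400)  # noisy measurements

estimate = readings.mean()  # the simplest possible "fitting procedure"
# Averaging n readings shrinks the expected error by roughly a factor of sqrt(n),
# so with 400 readings the estimate is ~20x tighter than a single measurement.
print(estimate)
```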
I'm disappointed that you had to add the 'metamagical' to your question tbh
It doesn't matter whether AI is in a hype cycle or not; that doesn't change how the technology works.
Check out the YouTube videos from 3Blue1Brown; he explains LLMs quite well.
Your first step is the word embedding: this vector space represents the relationships between words. Father - grandfather: the vector which turns 'father' into 'grandfather' is the same vector as the one from 'mother' to 'grandmother'.
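A toy sketch of that parallelogram property (hand-built 2D vectors, purely illustrative — real embeddings have hundreds of dimensions and are learned from data, not constructed by hand):

```python
import numpy as np

# Toy embeddings: one axis encodes gender, the other encodes generation.
father      = np.array([ 1.0, 0.0])
mother      = np.array([-1.0, 0.0])
grandfather = np.array([ 1.0, 1.0])
grandmother = np.array([-1.0, 1.0])

# The offset that turns "father" into "grandfather"...
generation_offset = grandfather - father
# ...is the same offset that turns "mother" into "grandmother".
print(np.allclose(mother + generation_offset, grandmother))  # True
```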
You then use these word vectors in the attention layers to create an n-dimensional space, aka latent space, which basically reflects a 'world' the LLM walks through. This is where the 'magic' of LLMs comes from.
Basically a form of compression, where the higher dimensions reflect a kind of meaning.
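A minimal numpy sketch of the attention step that mixes those word vectors (single head, no learned projections — just the scaled dot-product core, so this is a big simplification of what real transformer layers do):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of the rows of V,
    weighted by how well that query matches each key."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V

# Three token vectors attending to each other (random toy data).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```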
Your brain does the same thing. It can't store pixels, so when you go back to some childhood environment, like your old room, you remember it in some efficient (brain-efficient) way, like the 'feeling' of it.
That's also the reason why an LLM is not just some statistical parrot.
I saw weird results with Gemini 2.5 Pro when I asked it to provide concrete source code examples matching certain criteria, and to quote the source code it found verbatim. It said it in its response quoted the sources verbatim, but that wasn't true at all—they had been rewritten, still in the style of the project it was quoting from, but otherwise quite different, and without a match in the Git history.
It looked a bit like someone at Google subscribed to a legal theory under which you can avoid copyright infringement if you take a derivative work and apply a mechanical obfuscation to it.
People seem to have this belief, or perhaps just a general intuition, that LLMs are a Google search over a training set with a fancy language engine on the front end. That's not what they are. The models (almost) self-avoid copyright, because they never copy anything in the first place, which is why a model is a dense web of weight connections rather than an orderly bookshelf of copied training data.
Picture yourself contorting your hands under a spotlight to generate a shadow in the shape of a bird. The bird is not in your fingers, despite the shadow of the bird, and the shadow of your hand, looking very similar. Furthermore, your hand-shadow has no idea what a bird is.
For a task like this, I expect the tool to use web searches and sift through the results, similar to what a human would do. Based on progress indicators shown during the process, this is what happens. It's not an offline synthesis purely from training data, something you would get from running a model locally. (At least if we can believe the progress indicators, but who knows.)
While true in general, they do know many things verbatim. For instance, GPT-4 can reproduce the Navy SEAL copypasta word for word with all the misspellings.
Threatening violence*, even in this virtual way and encased in quotation marks, is not allowed here.
Edit: you've been breaking the site guidelines badly in other threads as well. (To pick one example of many: https://news.ycombinator.com/item?id=46601932.) We've asked you many times not to.
I don't want to ban your account because your good contributions are good and I do believe you're well-intentioned. But really, can you please take the intended spirit of this site more to heart and fix this? Because at some point the damage caused by poisonous comments is worse.
* it would be more accurate to say "using violent language as a trope in an argument" - I don't believe in taking comments like this literally, as if they're really threatening violence. Nonetheless you can't post this way to HN.
jQuery's big point was to give a consistent API over inconsistent browser implementations, so it typically saves you from bites more often than it bites you.
jQuery, for as long as it's been around, has had very few major releases (4 now) and very few breaking changes... hardly "biting"... other than those sites that inject half a dozen different copies of jQuery from different modules, where who knows which one you're actually working with, let alone 3rd party payloads.
I mean, personally, I've mostly used React for the past decade, and any integration directly with the browser has been straight JS/TS... but I can still see how jQuery can be useful for its conveniences.
At the end of the day Dilbert was entertainment...
And SA was weird as f, and made money by filling a niche.
His political views and other snippets made that quite clear.
And let's be honest, creating cartoons about our corporate capitalist shithole was easy enough to hit a nerve, but Dilbert was never more than a chuckle.
It became cultural because it was printed everywhere, and it was fun enough for its format.
NATO is / was the USA's way of controlling Europe to have something against Asia.
It's time for us / Europe to let the USA be whatever it wants and to kick them out.
We (Germany) are quite well equipped for making guns and tanks.
And by the way, it was our strategy to try to win over countries by NOT being the big bully, but Russia and the USA have made it clear that this no longer works.
I hope the USA leaves NATO, and that we kick them out sooner rather than later.