Hacker News | new | past | comments | ask | show | jobs | submit | heliumtera's comments | login

>And it’s for the better, I think

A strong reason to use LLMs today is accessing plain-text information without needing to interface with someone else's stupid CSS. You really think the general sentiment around CSS is: yay, things are improving?

Another strong reason to use LLMs: not needing to write CSS anymore.


I don't care about the general sentiment when I state my personal opinion. There are definitely people who like CSS and the direction it's moving in.

And that being said: the ability to express something in a single CSS directive as opposed to a special incantation in JavaScript is an objective improvement, especially with LLMs.


Fair, you did point out that it was your opinion.

General sentiment is quite relevant when discussing standards, but maybe it was a mistake to reply to your comment instead of addressing this point in the parent.


>it’s a negotiation between browser engineers

curious how this works, huh.

Seems like the same institutions striving to push browser updates are also authoring the standards.

>who need to keep things fast and responsive

Reality says otherwise. But they definitely need to push updates.


The entirety of it is fake.

What would be the alternative? Seriously, does someone believe the model somehow provisioned a ghost VPS and decided to participate, long term, in discussions on the web?

My god


The point of Moltbook is that OpenClaw human owners installed skills to participate in Moltbook, not that the bots decided to do that on their own. There’s no denying that it’s stupid and fake though.

Unfortunately there are a lot of people who believe this, which is sad.

You need 600GB of VRAM + memory (+ disk) to fit the full model, or 240GB for the ~1-bit quantized one. Of course this will be slow.

Through the Moonshot API it is pretty fast (much, much faster than Gemini 3 Pro and Claude Sonnet, probably faster than Gemini Flash), though. To get a similar experience they say you need at least 4xH200.

If you don't mind running it super slow, you still need around 600GB of VRAM plus fast RAM.

It's already possible to run 4xH200 in a domestic environment (it would feel instantaneous for most tasks, unbelievable speed). It's just very, very expensive and probably challenging for most users, though manageable/easy for the average Hacker News crowd.

Expensive AND high-end GPUs are hard to source. If you manage to source them at the old prices, it's around 200 thousand dollars to get maximum speed, I guess. You could probably run it decently on a bunch of high-end machines for, let's say, 40k (slow).
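The memory figures in this subthread match the usual back-of-envelope: weights take roughly params × bits / 8 bytes. A quick sketch (the ~1T parameter count and the 10% runtime overhead are my assumptions for illustration, not vendor specs):

```python
# Back-of-envelope model memory math (illustrative assumptions, not specs).
def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead: float = 1.1) -> float:
    """Weights-only footprint: params * bits / 8 bytes, plus a rough
    ~10% for KV cache, activations, and runtime overhead."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A hypothetical ~1T-parameter model at different quantizations:
for bits in (16, 8, 4, 2):
    print(f"{bits}-bit: ~{model_memory_gb(1000, bits):.0f} GB")
```

At 4-bit that lands around 550GB and near 2-bit around 275GB, in the same ballpark as the 600GB/240GB figures above.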


Please no. Talk is cheap.

I hate this trend of using adjectives to describe systems.

Fast
Secure
Sandboxed
Minimal
Reliable
Robust
Production grade
AI ready
Lets you _____
Enables you to _____

But I somewhat agree: code is essentially free, you can shit out infinite amounts of code. Unless it's good; then show the code. If your code is shit, show the program instead. If your program is shit too, your code is worse, but if you're still pursuing an interesting idea (in your eyes), show the prompt instead of the generated slop. Or even better, communicate an elaborated version of the prompt.

>One can no longer know whether such a repository was “vibe”

This is absurd. Simply false: people can spot INSTANTLY when the code is good, see: https://news.ycombinator.com/item?id=46753708


I don't think you're going to see phones with 512GB of VRAM+RAM in your lifetime.

When I was a kid I recall my cousin upgrading his computer to 1 or 2 MB so that we could get some extra features when playing Wing Commander 1. That was 1990.

35 years later, burner phones regularly come with 4 GB of RAM. That's a difference of 3 orders of magnitude, not taking into account miniaturization and speed improvements.

In another 35 years who knows what will happen. Yeah things can't improve at the same pace forever but I would be surprised if anyone back in 1990 could predict the level of technology you can get at every corner store today.

Maybe it's not that everyone gets an RTX 5090 in their pocket; maybe it's that LLMs can now run on an rpi. Realistically it's probably something in the middle.


When I was a kid in elementary school we used DOS computers with maybe 4MB of RAM, or a few MB, and the PlayStation wasn't many times more powerful. A few years (two or three) later we got Windows 95/98 machines with 128 times more RAM. A few years after that, computers could more or less emulate the PSX and the N64, all within six years.

The PlayStation 5 (16GB) has only twice as much RAM as the PlayStation 4 (8GB), and the PlayStation 6 will likely have just 1.5x as much as the PS5: 24GB. And even that might be optimistic given the recent explosion in memory prices.

Is this a joke? Not even 10 years ago the first phones with 4GB of RAM came out; today there are quite a few phones with 24GB. At that rate we'll be at 512GB by around 2040.
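Running the compound-growth arithmetic on that claim (the 2015 and 2025 anchor years are my reading of the comment, not stated in it) puts 512GB a touch later than 2040:

```python
# Sanity-check the extrapolation: 4 GB -> 24 GB over ~10 years,
# then project forward at the same compound annual rate.
def implied_annual_rate(start_gb: float, end_gb: float, years: float) -> float:
    return (end_gb / start_gb) ** (1 / years)

def project(current_gb: float, rate: float, years: float) -> float:
    return current_gb * rate ** years

rate = implied_annual_rate(4, 24, 10)   # roughly 1.2x per year
gb_2040 = project(24, rate, 15)         # 2025 + 15 years
print(f"implied rate: {rate:.2f}x/yr, 2040 projection: {gb_2040:.0f} GB")
```

At the implied ~1.2x/year, 24GB in 2025 projects to roughly 350GB in 2040, crossing 512GB around 2042, so the claim is close but slightly generous.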

I don't think there are "quite a few" phones with 24GB. For example, even the Samsung Galaxy S25 Ultra, which is one of the most expensive ones out there, only has 12GB DRAM.

Maybe he fell for the 12 + 12 crap they advertise, where half the "memory" is swap.

I took it as a comment on the economics of RAM, but I think the current state is transitory (does AI continue apace? Prices will eventually justify more competitors, even at tremendous startup cost. AI crashes? More RAM for the proles)

Phones have as much memory as Android requires, not much more. A low-end ThinkPad 10 years ago had 8GB of memory, and today it's the same capacity, just a bit more modern and faster. At the same rate we'd have a very, very fast 8GB ThinkPad by 2040. Same thing with GPUs: a mid-range GPU 10 years ago had 12GB of VRAM; AMD's last-generation mid-range (6600 XT) had 8GB and the 7600 XT 16GB; the Nvidia 5060 comes at 8GB/16GB.

Phones with 4GB of RAM are not feasible today because they wouldn't be able to run Android and phone home comfortably; even being a thin client requires running Android and a React application on Electron. 4GB is not good.

In 2040, phones will come out with the bare minimum to run Android, all the stupid Chinese apps the Android distro pushes onto consumers, and a React application on Electron.


A tech optimist would perceive this as a death threat! :,-)

Web people opted into React, dude. That says a lot.

They used Prisma to handle their database interactions. They preached tRPC and screamed TYPE SAFETY!!!

You really think these guys will ever touch the keyboard to program again? They despise programming.


This. I read this article and it pains me to see the amount of manpower put into doing anything but actually getting work done.

You are telling me that a markdown file saying:

*You are the Super Duper Database Master Administrator of the Galaxy*

does not improve the model's ability to reason about databases?


And we all heard they reverse engineered alien anti gravity technology in the 80s.

All I've heard is that they were aware it's anti-gravity. Nothing about reverse engineering.

Care to say more about that?


Bob Lazar claims he was assigned to a project where they not only found a working device capable of emitting gravitational waves out of phase with Earth's gravitational field, but also achieved the same effect by bombarding a mysterious element, unknown at the time. He called the material element 115 (the logical guess, since element 114's properties were known / it had been synthesized); the effect was achieved while it emitted one proton and decayed back to element 115.

Apparently he was in fact assigned to a top-secret project at Los Alamos and his expertise was alternative propulsion, but everything else is folklore. It is deep folklore, though, if you're interested in conspiracy theories.


There is definitely more to models' inability to perform well at SRE. For one, it is not engineering; it is next-token prediction, it is vibes. They could call it Site Reliability Vibing or something like that.

When we ask it to generate an image, any image will do. We couldn't care less. Try to sculpt it, try to rotate it 45 degrees, and all hell breaks loose. The image will be rotated, but the hair color could change as well. Pure vibes!

When you ask it to refactor your code, any pattern will do. You could rearrange the code in infinite ways, rename variables in infinite ways, without fundamentally breaking the logic. You could make as many arbitrary bullshit abstractions as you like and call it good, as people have done for years with OOP. It does not matter at all; any result will do in these cases.

When you want to hit a specific gRPC endpoint, you need a specific address, and the method expects a specific contract to be honored. This either matches or it doesn't. When you wish the LLM could implement a solution that captures specific syscalls from specific hosts and sends traces to a specific platform, using a specific protocol, consolidating records in a specific bucket... you have one state that satisfies your needs and 100 requirements that need to be fulfilled. It either meets all the requirements or it's no good.
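The "either it matches or it doesn't" point can be sketched without any real gRPC machinery. This is purely illustrative; the field names and types are hypothetical, the shape is what matters:

```python
# Illustrative sketch, not real gRPC: the endpoint's contract is a fixed
# set of fields and types, and matching is all-or-nothing --
# 99 of 100 requirements met is still a failure.
CONTRACT = {"host": str, "syscall": str, "trace_bucket": str, "port": int}

def satisfies_contract(payload: dict) -> bool:
    """Every required field present with the right type, nothing extra."""
    if set(payload) != set(CONTRACT):
        return False
    return all(isinstance(payload[k], t) for k, t in CONTRACT.items())

ok = {"host": "db-1", "syscall": "openat", "trace_bucket": "traces", "port": 50051}
close_enough = {**ok, "port": "50051"}   # right field, wrong type

print(satisfies_contract(ok))            # True
print(satisfies_contract(close_enough))  # False -- one mismatch sinks it
```

There's no partial credit: the "almost right" payload is exactly as useless as an empty one, which is unlike image generation, where any of a million outputs is acceptable.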

It truly is different from vibing, and LLMs will never be able to do this. Maybe agents will, depending on the harnesses and the systems in place, but a model alone just generates words, words, words, with no care about anything else.

