
Idea: pass the decompiled code through a "please rename variables according to their purpose" step using a coding agent. Not ideal, but arguably better than v03, v20. And almost zero effort in this day and age.

And have it hallucinate stuff? Nah, this stuff is hard enough without LLMs guessing.

Well, I mean just choosing better names, not touching the actual code. And you can also add a basic human filtering step if you want. You cannot possibly say that "v12" is better than "header.size". I would argue that even hallucinated names are good: you should be able to think "but this position variable is not quite correctly updated, maybe this is not the position", which seems better than "this v12 variable is updated in some complicated way which I will ignore because it has no meaning".

If the variable actually describes the number of input files, then v12 is better than header.size. How can you be sure that adding some LLM noise will provide genuinely accurate names?

i think for obj-c specifically (can't speak to other langs) i've had a great experience. it does make little mistakes, but an ai-oriented approach makes it faster/easier to find areas of interest to analyze or experiment with.

obj-c's objc_msgSend use makes it more similar to understanding minified JS than decompiling static C, because it literally calls many methods by string name.


It's a labeling task with benign failure modes, much better suited to an LLM than open-ended generation.

If you ask an LLM to do a statically verifiable task without writing a simple verifier for it, and it hallucinates, that mistake is on you, because writing a check that guarantees something like this succeeds is a very quick step.

I mean, step 0 is verifying that the code with the changed names actually compiles. But step 1, which is way more difficult, is ensuring that replacing v01 with out_file_idx or whatever actually gives a more accurate description of the purpose of v01. Otherwise, what's the point of generating names if they have a 10% chance of misleading more than clarifying?
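To make step 0 concrete, here is a minimal sketch of such a verifier (everything here, from the rename map to the use of cc -fsyntax-only, is my own illustrative choice; a real tool would rename through the AST rather than with a regex):

```python
# Hypothetical "step 0" verifier: apply an identifier-only rename map to
# decompiled C and reject it if the result no longer compiles.
# Caveat: the regex also touches matching text inside strings/comments;
# a real tool would rename via the parser's AST instead.
import re
import subprocess
import tempfile

DECOMPILED_SRC = """
int v01;
int process(int v12) { v01 = v12 + 1; return v01; }
int main(void) { return process(41); }
"""

def apply_renames(source, rename_map):
    # Replace whole identifiers only, so "v12" never matches inside "v123".
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, rename_map)) + r")\b")
    return pattern.sub(lambda m: rename_map[m.group(1)], source)

def still_compiles(source):
    with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
        f.write(source)
    # -fsyntax-only: parse and type-check without producing an object file.
    return subprocess.run(["cc", "-fsyntax-only", f.name]).returncode == 0

renamed = apply_renames(DECOMPILED_SRC, {"v01": "out_file_idx", "v12": "header_size"})
assert still_compiles(renamed), "LLM rename broke the build; reject it"
```

Step 1, semantic accuracy, is exactly the part no compiler can check.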

I'd argue that as long as it produced working code it's better than nothing, in this case.

The code already works with the v01, v02 names, though. The use of LLMs here is intended to add information so that humans can more easily understand what the code does. Which might be worthwhile, but I think this AI upscaling of Obama illustrates pretty well the potential risks of trying to fill in information gaps without a proper understanding of the data: https://x.com/Chicken3gg/status/1274314622447820801

But it is very simple. There are some limits to what we can do, based on the laws of physics, but we are so far away from them. And the limiting factor is mostly the fact that we are pretty stupid. AI should not have the same limits as us, so it can potentially do more, starting with basic things like curing aging or killing everyone.

This looks like an IQ test, but for whom?


For those on the wrong side of options contracts expiring? I would guess that this is paper silver being manipulated.


Indeed. There’s a large delta between paper silver and Shanghai physical silver prices right now.


China only has one silver fund (SLV equivalent), and it stopped creating new shares. So the existing shares trade at a large premium to the value of the underlying metal. Is that the "Shanghai physical" price you're talking about?



Trying to read the math behind quantum chemistry, it is never clear to me which parts are fundamental, which parts are tricks, which parts are needed just for closed-form expressions, which parts are computational approximations, and what the limitations are. For a subject that should be fundamental to future technological advances, and highly dependent on the growth of computational resources, it seems exceptionally opaque and, I suspect, not well presented.


In a nutshell, the only approximation in Hartree Fock is the assumption that the electronic wave function has a very specific form. Namely, that it is a Slater determinant of orbitals, and that each orbital is a linear combination of atomic orbitals from a fixed basis set. The linear coefficients of the orbitals are then solved for via the (exact) variational method.

Of course, the true wave function is generally not a Slater determinant. In particular, electrons in a Slater determinant with different spins are uncorrelated.

The standard approach to resolving this is density functional theory. In that model, the main approximation is the choice of an “exchange correlation functional” which approximates the electron exchange and correlation energy. The choice of a functional is unfortunately a dark art in the sense that they can only be evaluated empirically rather than from first principles.
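To make this concrete, here is a minimal sketch using PySCF (my choice of library; nothing in this thread depends on it) showing where the two approximations enter in practice: the fixed basis set for Hartree Fock, and the exchange correlation functional for DFT.

```python
# Minimal sketch: Hartree-Fock and DFT on H2 with PySCF.
from pyscf import gto, scf, dft

# The fixed atomic-orbital basis set is the first approximation.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

# Restricted Hartree-Fock: variationally optimize the orbital
# coefficients of a single Slater determinant.
hf = scf.RHF(mol)
e_hf = hf.kernel()

# DFT: same machinery, plus the empirically chosen functional.
ks = dft.RKS(mol)
ks.xc = "b3lyp"  # the "dark art" choice described above
e_dft = ks.kernel()

print(f"HF energy: {e_hf:.6f} Ha, B3LYP energy: {e_dft:.6f} Ha")
```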

The classic reference for Hartree Fock is Modern Quantum Chemistry by Szabo and Ostlund: https://books.google.com/books/about/Modern_Quantum_Chemistr...

It is very well written and I highly recommend it.

I also wrote up some notes here: https://www.daniellowengrub.com/blog/2025/07/26/scf


Hi, thanks for the recommendations. I looked a little at the book; basically, at the end we can compute some properties of small molecules sitting alone in space? What about arbitrary molecules, interacting? Or computing reaction rates? In a solvent? My understanding is that there are algorithms for all of these, and progress is probably being made, but I have never seen (online) anyone complaining that we cannot compute even this basic chemistry. I feel like we should care more about this problem.


From my understanding, accurate simulations at the electron level (post Hartree Fock / DFT) are currently limited to ~100 atoms (on a GPU cluster this can take hours or days). Maybe this can be pushed to ~1000 atoms with aggressive optimization techniques like FMM.

So at this level of simulation it is currently only possible to simulate one medium size molecule or the interaction of a few small ones.

To simulate larger systems, it is necessary to work at a (semi-)classical level of abstraction that approximates quantum mechanics, for example using molecular dynamics to essentially simulate a fluid with a balls-and-springs model. In this case, electron-level simulation can still be useful for deriving the parameters (conceptually, the spring tension).
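To give a feel for that level of abstraction, here is a toy sketch of my own (the spring constant below is made up, but it is exactly the kind of number one would fit from an electron-level calculation):

```python
# Toy "balls and springs" molecular dynamics: one harmonic bond,
# integrated with velocity Verlet. All numbers are arbitrary.
import numpy as np

k, r0 = 500.0, 1.0   # spring constant, equilibrium bond length
m = 1.0              # atom mass
dt = 1e-3            # time step

x = np.array([0.0, 1.2])  # two atoms on a line, bond stretched past r0
v = np.zeros(2)

def forces(x):
    # Harmonic bond: restoring force -k * (r - r0) along the bond axis.
    f = -k * ((x[1] - x[0]) - r0)
    return np.array([-f, f])

f = forces(x)
for step in range(1000):
    v += 0.5 * dt * f / m   # velocity Verlet, first half-kick
    x += dt * v             # drift
    f = forces(x)
    v += 0.5 * dt * f / m   # second half-kick

print("final bond length:", x[1] - x[0])
```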

I completely agree that it’s interesting to investigate how far the electron level simulation can be pushed.


If electricity is cheap enough, you can take CO2 from the air and make fuel (not sure what the threshold is; 5-10 times cheaper than now?). Then you can use that fuel where you need its energy density. I agree that it seems pretty dumb to ignore China's (and soon India's) CO2 emissions. Again, if you manage to make nuclear cheap enough, you could just gift reactors to everyone who needs them. It can be argued that cheap and safe nuclear was never really tried.
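As a rough sanity check on that threshold (all numbers below are my own ballpark assumptions, not anything established):

```python
# Back-of-envelope: electricity cost per liter of synthetic fuel.
# Assumptions (mine): diesel holds ~10 kWh/L of chemical energy, and an
# optimistic DAC + electrolysis + synthesis chain is ~45% efficient.
energy_per_liter_kwh = 10.0
chain_efficiency = 0.45
kwh_per_liter = energy_per_liter_kwh / chain_efficiency  # ~22 kWh/L

for price in (0.10, 0.05, 0.01):  # $/kWh
    print(f"${price:.2f}/kWh -> ~${kwh_per_liter * price:.2f}/L electricity alone")
```

At ~$0.10/kWh the electricity alone costs around $2/L, which is roughly where the "5-10 times cheaper" intuition comes from.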


I think that is a pretty unrealistic scenario though. Nuclear won't get that cheap.


Well, it is quite difficult indeed, but I am curious what will happen in the next 20 years, with China very interested in this, and some renewed interest in the west too. I am also not sure which is more unrealistic, cheap nuclear or fusion.


Yeah, I mean... the point isn't the price, imo. We can build out nuclear and sequester CO2 without it being super cheap. We can do massive projects like that anyway.


Actually, when one is old enough, the lethality all around becomes much more visible.


This is true, and no one thinks about it until they reach 50-70.

Either everyone you know dies one by one, or you do.


Not sure how helpful it is, but: words or concepts are represented as high-dimensional vectors. At a high level, we could say each dimension is another concept, like "dog"-ness or "complexity" or "color"-ness. "A word looks up how relevant it is to another word" is basically just relevance = distance = vector dot product, and the dot product can be distorted ("some directions are more important" for one purpose or another; the Q/K/V matrices distort the dot product). Softmax is just a form of normalization (everything sums to 1 = a proper probability distribution). The whole shebang works only because all the pieces can be learned by gradient descent; otherwise it would be impossible to implement.
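A bare-bones numpy sketch of the above (my own illustration: one attention head, random weights, no training):

```python
# Single-head attention: relevance is a learned ("distorted") dot
# product, and softmax turns the scores into a proper distribution.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_words, d = 4, 8                  # 4 words, 8-dim "concept" vectors
X = rng.normal(size=(n_words, d))  # word embeddings

# Learned matrices that distort the dot product for a given purpose.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)  # relevance of every word to every word
weights = softmax(scores)      # each row sums to 1
output = weights @ V           # each word becomes a relevance-weighted mix
print(weights.round(2))
```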


Ok, what would be a vision of humanity you would agree with?


And then we have peace :)


"Bro please just give me the Sudetenland. I swear bro just let me take the rest of Czechoslovakia and I'm done. It's my last territorial demand bro I promise. Just one more annexation and the Treaty of Versailles is fixed. Please bro it's just for the living space."

"Bro please just let me take Kyiv. I swear bro just one more special military operation and the security buffer is complete. It's not a war bro it's denazification. Just give me the Donbas and the land bridge and I'll be chill. One more mobilization and the multipolar world order is saved bro please."

"Bro please just acknowledge the Nine-Dash Line. I swear bro just let me have Taiwan and the great rejuvenation is complete. It's totally an internal matter bro. Just one more island chain and the century of humiliation is over. Please bro just let me cross the strait."

"Bro please just let me bring them freedom. I swear bro just one more regime change and the region is stable. It's about democracy bro it's not about the oil reserves I promise. Just let me install an interim president. Please bro just one more coup."


Yes. Or as Chamberlain put it, "peace for our time."


Is this code a form of AST rewrite rules for optimization? This operation still looks like an incomprehensible wall of code 40 years later (I have looked inside the C++ compiler in the past).
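For context, a single AST rewrite rule can be tiny in isolation; here is a constant-folding sketch using Python's ast module as a stand-in (my own illustration; production compilers express thousands of such rules in C++ pattern-matching code, which is part of why it reads as a wall):

```python
# A minimal AST rewrite rule: fold constant arithmetic bottom-up.
import ast

class ConstantFold(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite children first
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            if isinstance(node.op, ast.Add):
                return ast.copy_location(ast.Constant(node.left.value + node.right.value), node)
            if isinstance(node.op, ast.Mult):
                return ast.copy_location(ast.Constant(node.left.value * node.right.value), node)
        return node  # no rule matched; leave the node alone

tree = ast.parse("x = 2 * 3 + 4")
print(ast.unparse(ConstantFold().visit(tree)))  # -> x = 10
```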

