
I found this earlier today when looking for research and ended up reporting it for citing fake sources. Please correct me if I'm wrong, but I couldn't find "[9] Jongsuk Jung, Jaekyeom Kim, and Hyunwoo J. Choi. Rethinking attention as belief propagation. In International Conference on Machine Learning (ICML), 2022." anywhere else on the internet.



Yep, nothing by even a subset of those authors. The closest paper from that conference:

"Rethinking Attention-Model Explainability through Faithfulness Violation Test" by Yibing Liu, Haoliang Li, Yangyang Guo, Chenqi Kong, Jing Li, Shiqi Wang

https://proceedings.mlr.press/v162/liu22i.html

https://icml.cc/virtual/2022/spotlight/18082


It's "vibe" research. Most of it is basically pure nonsense.

Care to elaborate?

The headline theorem, "every sigmoid transformer is a Bayesian network," is proved by `rfl` [1]. For non-Lean people: `rfl` means "both sides are the same expression." He defines a transformer forward pass, then defines a BP forward pass with the same operations, wraps the weights in a struct called `implicitGraph`, and Lean confirms they match. They match because he wrote them to match.
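A minimal Lean sketch of that pattern (hypothetical definitions, not the repo's actual code): when two functions are written with literally the same operations, `rfl` closes the equality goal because both sides reduce to the same term.

```lean
-- Hypothetical stand-ins for the two "forward passes":
def transformerStep (w x : Nat) : Nat := w * x + 1
def bpStep (w x : Nat) : Nat := w * x + 1

-- `rfl` succeeds only because both definitions unfold to the same expression.
theorem step_eq (w x : Nat) : transformerStep w x = bpStep w x := rfl
```

Nothing substantive is established beyond "I wrote the same thing twice."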

The repo with a real transformer model (transformer-bp-lean) has 22 axioms and 7 theorems. In Lean, an axiom is something you state without proving. The system takes your word for it. Here the axioms aren't background math, they're the paper's claims:

- "The FFN computes the Bayesian update" [2]. Axiom.

- "Attention routes neighbors correctly" [3]. Axiom.

- "BP converges" [4]. Axiom, with a comment saying it's "not provable in general."

- The no-hallucination corollary [5]. Axiom.

The paper says "formally verified against standard mathematical axioms" about all of these. They are not verified. They are assumed.
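Schematically, what that looks like in Lean (names invented for illustration, not the repo's identifiers):

```lean
-- An axiom is accepted with no proof term at all.
axiom BPConverges : Prop
axiom bp_converges : BPConverges

-- Downstream "theorems" type-check, but they only restate the assumption.
theorem convergence_verified : BPConverges := bp_converges
```

Lean will happily build on this; `#print axioms convergence_verified` is the command that would reveal the dependence.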

The README suggests running `grep -r "sorry"` and finding nothing as proof the code is complete. In Lean, `sorry` means "I haven't proved this" and throws a compiler warning. `axiom` also means "I haven't proved this" but doesn't warn. So the grep returns clean while 22 claims sit unproved. Meanwhile the godel repo has 4 actual sorries [6] anyway, including "logit and sigmoid are inverses," which the paper treats as proven. That same fact appears as an axiom in the other repo [7]. Same hole, two repos, two different ways to leave it open.
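Side by side, as a toy illustration (not the repos' code):

```lean
-- `sorry`: unproved, but the compiler warns and grep finds it.
theorem claim_a : 1 + 1 = 2 := sorry   -- warning: declaration uses 'sorry'

-- `axiom`: equally unproved, compiles silently, invisible to `grep -r "sorry"`.
axiom claim_b : 1 + 1 = 2
```

A more honest check would be something like `grep -rE "sorry|axiom"`, or running `#print axioms` on the headline theorems.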

Final count across all five repos: 65 axioms, 5 sorries, 149 theorems.

Claude (credited on page 1) turned it into "Not an approximation of it. Not an analogy to it. The computation is belief propagation." This builds to a 2-variable toy experiment on 5K parameters, presented as the fulfillment of Leibniz's 350-year-old dream, and ends with the sign-off "Calculemus."

[1] https://github.com/gregorycoppola/sigmoid-transformer-lean/b...

[2] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...

[3] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...

[4] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...

[5] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...

[6] https://github.com/gregorycoppola/godel/blob/bc1d138/Godel/O...

[7] https://github.com/gregorycoppola/sigmoid-transformer-lean/b...


Thanks for writing such an elaborate reply! I wish I was familiar with Lean, so I could follow. But if you're right, it would put the claims of the paper in a totally different light.

Perhaps others with knowledge in Lean could also chime in?


Doubtful:

- articles two days old

- I got links right to the code

- it's clearly a waste of time if you know Lean; I went way above and beyond already

Maybe if you were able to show "no, actually, more than zero of this is well founded," someone might be tempted. But you'd need someone who showed up days later, knows enough Lean to validate it for you, yet not enough Lean to know it's a joke just from looking at the links.

You're welcome! Don't mean to be mean (pun intended); hope you don't read it that way. Just figured it'd give you some food for thought re: exactly how much work you can expect from other people, and that you might need to set more constraints on an "idk, can someone else tell me more?" reaction than "one person said something, but someone else said they're wrong, so the score is 1 to 1."


Thanks again - this time I have to admit I really don't get what you're trying to say?!

Sorry, I was unclear!

You said you wished someone with Lean knowledge could check my work. I'm saying: you can check it yourself, right now, without knowing Lean.

Click any of links [2] through [5]. You'll see the word `axiom` in the code. In Lean, `axiom` means "assume this is true without proof." That's it. That's the whole critique. The paper says "formally verified," but the key claims — FFN computes Bayesian update, attention routes correctly, BP converges, no hallucination — are all just assumed.

You don't need to take my word for it, and you don't need a Lean expert to break a tie. The evidence is right there in the links. It'd be like a paper claiming "we formally proved this bridge is safe" and the proof says "Axiom: this bridge is safe." You don't need a civil engineer to tell you that's not a proof.
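In Lean terms, the bridge version of the trick would look like this (illustrative names only):

```lean
axiom BridgeIsSafe : Prop
axiom bridge_is_safe : BridgeIsSafe

-- Compiles cleanly, but "proves" nothing: it just echoes the assumption.
theorem bridge_safety_verified : BridgeIsSafe := bridge_is_safe
```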


I suspect it means it's LLM-generated without being checked.

The Coefficient of Sketch here feels pretty high: https://xcancel.com/gregcoppola5d


