
The example you gave, however, is obvious to every graduate student in any field that touches analysis or asymptotics. That is not the real problem; the real problem is proof by assertion of proof: "Lemma 4.12 is derived by standard techniques as in [3]; so with that lemma in hand, the theorem follows by applying the arguments of Doctorberg [5] to the standard tower of Ermegerds."

Too many papers follow this pattern, especially for the more tedious parts. The two problems are that it makes the lives of students and postdocs torture, and that the experts tend to agree without sufficient scrutiny of the details (and the two problems are intrinsically connected: the surviving students are "trained" to accept those kinds of leaps and proliferate the practice).

Frustratingly, I am often told that this pattern is necessary because otherwise papers would be enormous---as if I were some kind of blithering idiot who doesn't know how easily papers can explode in length. Of course we cannot let papers explode in length, but there are ways to tame their length without resorting to nonsense like the above. For example, the snippet above, abstracted from an actual paper, can be converted into a proof verifiable by postdocs in two short paragraphs with some effort (which I went through).

The real motive behind those objections is that authors would need to take significantly more time to write a proper paper, and, even worse, we would need actual editors (gasp!) at journals to perform non-trivial work.



I think Kevin Buzzard et al. are aiming for a future where big, complicated proofs not accompanied by code are considered suspicious.

I wonder if being able to drill all the way down on the proof will alleviate much of the torture you mention.


Using vague terms like "obvious" or "standard techniques" is doubtless wrong, but I would see no problem with a paper basing its conclusions on demonstrations from a source listed in the bibliography, except that in many cases the paper's readers are unable to obtain access to the works in the bibliography.

Even worse is when the bibliography contains abbreviated references from which it is impossible to determine the title of the cited book or paper, or the name of the publication in which it appeared, and googling finds no plausible match for those abbreviations.

In such cases it becomes impossible to verify the content of the paper that is read.


This is a wider problem in general and an odd bit of history: scientific papers predate the internet, so the reference system predates computation. The DOI system (dx.doi.org) is a substantial improvement, but it's not the convention or expectation, and IMO it's also insufficient.

Realistically, all papers should just include a list of hashes which can be resolved exactly back to the reference material, with the conventional citation backstopping that the hash, author intent, and target all match.
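
A minimal sketch of what such a content-addressed citation could look like, assuming the cited work is archived as a byte-exact file; the file name, citation text, and DOI below are made up for illustration (echoing the joke reference upthread), not taken from any real bibliography:

    import hashlib

    def sha256_of_file(path: str) -> str:
        # Hash the exact bytes of an archived copy of the reference.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical bibliography entry: the conventional citation plus a content hash.
    # "doctorberg2003.pdf" stands in for the byte-exact copy the citing author actually read.
    entry = {
        "citation": "Doctorberg, 'On the standard tower of Ermegerds', 2003 [5]",
        "doi": "10.1234/made.up.example",
        "sha256": sha256_of_file("doctorberg2003.pdf"),
    }

    # Verification: anyone who later obtains a copy can check it matches what was cited.
    assert sha256_of_file("doctorberg2003.pdf") == entry["sha256"]

The conventional citation still carries the human-readable intent; the hash just pins down exactly which bytes the author was looking at, though it says nothing by itself about scans, revisions, or other canonicalization questions.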

But that runs directly into the problem of how the whole system is commercialized (although requiring publishers to at least hold proof that they have a particular byte-exact copy of a paper on file would be a start).

Basically we have a system which is still formatted around the limitations of physical stacks of A4/Letter size paper, but a scientific corpus which really is too broad to only exist in that format.


The issue with hashing is that it's really tricky to do that with mixed media. Do you hash the text? The figures? What if it's a PDF scan of a different resolution? I think it's a cool idea—but you'd have to figure out how to handle other media, metadata, revisions, etc, etc.


DOIs fix that.


I strongly agree with you, in the sense that many papers provide far fewer details than they should, and reading them is considered something of a hazing ritual for students and postdocs. (I understand that this is more common in specialties other than my own.)

The blog post seems to be asserting a rather extreme point of view, in that even the example I gave is (arguably!) unacceptable to present without any proof. That's what I'm providing a counterpoint to.


> (I understand that this is more common in specialties other than my own.)

True, analytic number theory does have a much better standard of proof, if we disregard some legacy left from Bourgain's early work.


> The two problems are that it makes the lives of students and postdocs torture, and that the experts tend to agree without sufficient scrutiny of the details (and the two problems are intrinsically connected: the surviving students are "trained" to accept those kinds of leaps and proliferate the practice).

I think this practice happens in many specialized fields. In math, the main problem is that the publications become inaccessible. But when the same sort of thing happens in a field whose formulations rest on assumptions that aren't formalizable but merely "plausible" (philosophy or economics, say), those assumptions get introduced into the discussion invisibly.


Not a (former) mathematician (but, as a former PhD student in theoretical physics, I still have some affinity with math): I remember being shocked when, having derived a result using pen and lots of sheets of paper, my co-promotor told me to just write '<expression one> can be rewritten as <expression two>' and only briefly explain how (nothing more than a line or two), as the journal in which we were to publish (and did publish) the article would not want any of those calculations written out.



