I just skimmed through it, so maybe I'm missing something, but it also appears to be incomplete. It even says "the completion of the rigorous derivation of the above results will be presented in a companion paper". He brings up equation 1.6, but I don't see a rigorous handling of its asymptotics, nor any precise statement of what the growth rate of zeta on or near the critical line is expected to be. The conclusions section says about as much. I hope it works out and we get to see the conclusion, but I'd really like to see some precise growth rates so we can at least compare them to known bounds and examples.
There are earlier revisions of that arXiv submission going back to 2017. I think if it had introduced an idea that can prove a problem this hard, it would already have been creating buzz. Otherwise, I don't know what to make of its submission history.
At my university, my professor told us about the existence of this proof in December, and that he gave a presentation at that time. So I guess it has been reviewed thoroughly.
Speaking as an analytic number theorist, the branch of math of which the Lindelof Hypothesis is part, could you tell us what you personally think of the paper?
Structurally this is what my impression is of the paper (I don't understand the math):
The result seems very well presented: the paper includes background and a two-page introduction, summarizes the derivation of the main result on pages 5-12, derives its main theorems and the lemmas used, and then on pages 50-52 summarizes it all again. Finally, three appendices over four pages provide some numerical verification (a sanity check), and an acknowledgment section says "This project would not have been completed without the crucial contribution of Kostis Kalimeris. Kostis has studied extensively the classical techniques for the estimation of single and multiple exponential sums; these techniques are used extensively in our joint paper with Kostis [KF] and some of the results of this paper are used in section 6. Furthermore, Kostis has checked the entire manuscript and has made important contributions to the completion of some of the results presented here." There are another seven paragraphs of acknowledgments going back more than three years, and finally three pages of references, including to private correspondence and preprints.
The affiliations on the paper are Department of Applied Mathematics and Theoretical Physics, University of Cambridge, and Viterbi School of Engineering, University of Southern California.
Additionally, this researcher has a proven track record, his Wikipedia article says:
>He has made seminal contributions in a remarkably broad range of areas which include: symmetries, integrable nonlinear PDEs, Painlevé equations and random matrices, models for leukemia and protein folding, electro-magneto-encephalography, nuclear imaging, and relativistic gravity. Also, he has introduced a completely new method for solving boundary value problems known as the Fokas method, which has been acclaimed as the most important development in the analytical treatment of PDEs since the introduction of the Fourier transform.
He is the winner of the Naylor prize (past winners include Roger Penrose, 1991, and Stephen Hawking, 1999, among others) and holds 7 honorary doctorates (right side of his Wikipedia page).
Now I and other non-mathematicians can look into Euler's totient function and RSA, and learn more. Thanks for answering.
So is there really never a proof in pure mathematics with more obvious or immediate near-term applications?
Are there any historical examples of that? A major breakthrough in mathematics that once understood, it was immediately obvious that it was going to make X work better, and then it did?
Mathematics with immediate payoff is usually called physics or computer science. For example the simplex method for linear programming made a lot of things more efficient.
The maths that has an immediate near-term application usually falls, almost by definition, under applied mathematics (or possibly a related field, such as physics, engineering, computer science etc.).
Pure mathematics essentially concerns itself with the mathematics of mathematics. It's basically the process of trying to either prove certain "empirically discovered" mathematical facts, or understand deeply what those facts mean in a more general way. Understanding this type of mathematics in this deep way allows further mathematics to bud off of that specific understanding.
Unless you're talking about improvements in artistic techniques, the art itself doesn't really lead to something more practical, whereas pure mathematics actually does often lead to applications in other fields.
Modern mathematics is so incredibly abstract that I'd be very surprised if more than a minuscule fraction of all published theorems ever had practical applications. If you count that you should also count improvements in for example chemistry (pigments) or the theory of human perception that were initiated by art.
Just pointing out that interestingly Cambridge, where Fokas is a professor, has not released anything.
He is merely visiting USC so it strikes me as weird that they would claim this PR so quickly.
Also, "Mathematician-MD" somehow makes it sound like the MD means he is a lesser mathematician, or not a full mathematician. Fokas is a well-respected professor at one of the top applied maths departments in the world. A better and less biased title would be "Math professor claims..." or "Cambridge math professor claims...".
When you actually read it, being a mathematician and an MD is more impressive, not less. The interesting thing about him is his cross-disciplinary knowledge. I don't see the insult.
If you inspect [0], you'll find that his MD was, in fact, in medicine. Before that, though, he earned a PhD in applied math from Caltech, and immediately after receiving his MD he became chair of Clarkson University's Math and Computer Science department.
Mentioning his MD is a distraction and, as the previous poster commented, suggests that he's something of an amateur. This is very far from the case.
So this goes off on a tangent but I feel it relates to noncentrality [0]. Fokas has a PhD in maths. Being an MD or having gotten an MD 40 years ago is clearly entirely non-central to his career. Calling him Mathematician-MD seems like it is meant to make him seem a lesser mathematician, e.g. by insinuating that this is just something he does part time, and that he can hence be taken less seriously.
I don't know what the poster meant by suggesting 'Mathematician-MD', but it reads weirdly to me for that reason. It's highlighting an attribute of a person that is entirely unrelated to his career or this article. Why if not to denigrate him? The title should be changed to neutrally reflect his position.
An MD is a medical degree. It's possible to have an MD but not practice medicine professionally, in the same way someone can earn a law degree (a JD) but choose not to be a practicing lawyer. "Professor" is an academic title for people at a university or advanced teaching institution. Most professors do have terminal degrees in their respective fields, like PhDs, MDs, or JDs, so you could also call a professor with a PhD "Dr. X" instead of "Prof. X". But "Professor" is generally considered a more prestigious title, since it's much rarer and harder to get a professorship than to get an advanced degree.
Hypothesis: As the complexity of proofs approaches the limits of human ability to understand, saying it is a proof becomes more important than proving it is a proof.
Evidence: The Wikipedia page for the Lindelöf hypothesis already unambiguously states that it has been formally proved.
If you check the history and talk pages of that article, you will see that there is one very persistent user who has repeatedly re-added this section while at least two others tried to remove it.
Obviously it doesn't make sense to add all the positive integers (the series doesn't converge), but if you squint and ignore this, and just do the arithmetic a certain way, you get -1/12.
The original definition I gave is valid when s is a complex number with real part greater than 1. But the Riemann zeta function can be proved to have analytic continuation: zeta(s) makes sense for any complex number s, other than 1. For example, zeta(-1) really equals -1/12.
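To make the convergent region concrete, here's a minimal Python sanity check (my illustration, not from the thread): partial sums of the defining series at s = 2, where the exact value pi^2/6 is known from Euler. Note this says nothing about zeta(-1), which only makes sense via analytic continuation, not via this series.

```python
import math

def zeta_partial(s, terms):
    """Partial sum of sum_{n>=1} n^(-s), valid as an approximation
    to zeta(s) only when Re(s) > 1, where the series converges."""
    return sum(n ** (-s) for n in range(1, terms + 1))

approx = zeta_partial(2, 100000)
exact = math.pi ** 2 / 6  # Euler's evaluation of zeta(2)
# The partial sum approaches pi^2/6 ~ 1.644934 as terms grows;
# trying the same thing at s = -1 would just diverge to infinity.
```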
The zeta function is easy to understand when the real part is greater than 1: the formula I described is enough. Because of the so-called functional equation, it is also easy to understand when the real part is less than 0. But it is in the middle that all of its secrets lie. For example, the notoriously unsolved Riemann Hypothesis stipulates that the "nontrivial" zeroes all have real part 1/2.
The Lindelof Hypothesis stipulates that the zeta function grows very slowly along this line (real part = 1/2). It is very closely related to the Riemann Hypothesis. More technical, and of less direct interest to nonspecialists, but in the same family of problems.
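In symbols, the standard formulation (not quoted from the paper) is:

```latex
% Lindelöf Hypothesis: for every epsilon > 0,
\zeta\!\left(\tfrac{1}{2} + it\right) = O\!\left(|t|^{\varepsilon}\right)
\quad \text{as } |t| \to \infty .
% For comparison, the classical "convexity" bound obtained from the
% functional equation and Phragmén–Lindelöf gives only
% O(|t|^{1/4 + \varepsilon}); any exponent strictly below 1/4 is
% called a subconvexity bound.
```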
As an example of how much mathematicians care about this, here are the Google search results for "subconvexity bound":
A "subconvexity bound" is any result which approaches the Lindelof Hypothesis, for either the Riemann zeta function or a more general "L-function". A lot of ink has been spilled on proving results weaker than what Fokas is claiming.
> Obviously it doesn't make sense to add all the positive integers (the series doesn't converge), but if you squint and ignore this, and just do the arithmetic a certain way, you get -1/12.
I'm not a pure-maths type person but in my experience, if you get one answer by following simple, well understood maths (like "the sum of two positive integers is a positive integer") and another answer by "squinting and ignoring it", this doesn't mean the simple answer is wrong, it means you did something else wrong (like the hidden divide-by-zero present in your typical "proof that 1 = 2"). Paradoxes point to an error in the formulation of the question.
But this isn't just summing up a few positive integers, in which case there's no ambiguity in what the answer is. Once you start summing up infinitely many things, you have to bring in some theory and some techniques to justify what the answer is. These techniques generally have a limited scope, but there's a big theory of "divergent series" showing that if you extend these techniques to more contexts, you still get a definition that is compatible with most of what one would want from a limit. For example, taking running averages, or taking the terms as the coefficients of a power series and taking a limit.

So classically 1-1+1-1+1-... doesn't converge, but if you "squint" and take the running averages of the partial sums (1, 0, 1, 0, ...) you get 1/2. Or if you use the fact that 1+x+x^2+x^3+... = 1/(1-x) and take x = -1, you get 1/2. Or a myriad of other approaches that are perfectly valid with standard convergence, and which all happen to give that 1-1+1-1+... is 1/2. And yes, if you squint very hard, you get that 1+2+4+8+16+... = 1/(1-2) = -1.
He's taking the running average of the partial sums:
1-1 = 0
0+1 = 1
1-1 = 0
0+1 = etc etc
The partial sum would end up being equal to the final sum, by definition, when you're done summing all the items. Since we're talking about infinite sequences, you're never done summing, so you have to do something else to arrive at an answer. For example, seeing which way the partial sum trends. In this case it trends solidly in the direction of 1/2.
>...except that it doesn't; to me, it even looks more like it trends in the opposite direction, i.e. it is trying to stay away from 1/2.
Like two magnets repelling each other: if you were to hold them together and we call that 1/2 - but they are always trying to push away from each other!
The problem with this first approach is that you can get arbitrary results just by reordering the series. Also, the formula you give for the geometric series only works for |x| < 1.
Reordering a divergent series can indeed give you any result you want. This is not reordering though, it’s called Cesàro summation. It won’t always get you an answer, but if it does then the answer is unique and reacts nicely to sums and products.
Talking about it as summation might be misleading, since that's one of those concrete terms that mathematicians like to redefine without anyone's approval. Picture this: we have a library that includes many tricks and approaches for taking an infinite series as input and outputting a number. We know it works as expected on every convergent series. But we forgot to put in any preconditions, and we've let people input things that are not convergent series. But whoa, in many cases we are still getting a number out of it, and it's always the same answer no matter what we do. Maybe that's something worth studying?
> This is not reordering though, it’s called Cesàro summation. It won’t always get you an answer, but if it does then the answer is unique and reacts nicely to sums and products.
It's also probably worth noting that, if the series is genuinely summable, then Cesàro summation gives its sum.
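A minimal sketch of Cesàro summation in Python (my illustration, not from the thread): averaging the partial sums assigns 1/2 to Grandi's series 1 - 1 + 1 - 1 + ..., while agreeing with the ordinary sum on a genuinely convergent series.

```python
def cesaro(terms):
    """Cesàro value of a finite list of terms: the average of its
    partial sums. For a convergent series this tends to the ordinary
    sum; for some divergent series it still settles on a number."""
    partial, total = 0.0, 0.0
    for t in terms:
        partial += t          # running partial sum
        total += partial      # accumulate partial sums for averaging
    return total / len(terms)

grandi = [(-1) ** n for n in range(100000)]    # 1, -1, 1, -1, ...
geometric = [0.5 ** n for n in range(100000)]  # ordinary sum is 2

# cesaro(grandi) is 1/2, even though the series has no ordinary sum;
# cesaro(geometric) approaches 2, matching the ordinary sum.
```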
I wish impendia had left the "squinting" part out of his otherwise excellent explanation and stuck to explaining that the zeta function is defined by the infinite series where that series converges, and defined where the series diverges using a tool called analytic continuation.
In short:
Where the series obviously converges, use the summing formula.
Where the series is misbehaving, use analytic continuation instead of resorting to weird infinite-series reordering tricks (which I've always felt to be borderline offensive from a mathematical-rigor POV).
My understanding of analytic continuation is that if a function f of the complex plane is sufficiently well behaved on a certain domain of the plane, it can be "extended" to the rest of the plane in a unique way that preserves the well-behavedness.
In the case of zeta, it can be shown that zeta obeys a functional equation that allows it to be extended everywhere.
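Concretely, Riemann's functional equation (standard form, not taken from the paper) is the following, and plugging in s = -1 recovers the -1/12 value:

```latex
% Riemann's functional equation:
\zeta(s) = 2^{s}\,\pi^{s-1}\,
           \sin\!\left(\tfrac{\pi s}{2}\right)\,
           \Gamma(1 - s)\,\zeta(1 - s)
% At s = -1, using \Gamma(2) = 1 and \zeta(2) = \pi^2/6:
\zeta(-1) = 2^{-1}\,\pi^{-2}\,
            \sin\!\left(-\tfrac{\pi}{2}\right)\,
            \Gamma(2)\,\zeta(2)
          = \frac{1}{2}\cdot\frac{1}{\pi^{2}}\cdot(-1)\cdot 1
            \cdot\frac{\pi^{2}}{6}
          = -\frac{1}{12}.
```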
The thing impendia is referring to there is analytic continuation, which (as mentioned) is a way to extend the domain of a certain set of functions, like the zeta function. It is perfectly rigorous. His/her language was just a short-hand for "don't worry about the details here, but it does work".
Mathematicians aren't stupid, and more than any other profession, they value rigor. They know what they're doing.
Rigor in logic but, it seems to me, not often rigor in description. There's a lot of willingness to say "if we change rule X to mean something totally other, then you can do this thing" and then just describe it as "you can do this thing". Sure, but "this thing" now means something different.
In this case, it's worth pointing out that the zeta function admits a functional definition in which you can define some points of the function in terms of their "reflection" in the line Re(z) = 1/2. This allows a perfectly rigorous definition of the function for values like -1. One might call the functional form more fundamental and note that the series form only holds true for Re(z) > 1.
I disagree with this. Often what is gained from a proof is not the fact that a proof is known, but more insights about the original problem. So, a small technical error may "invalidate" a proof but it does not make it meaningless. Just like a bug in software does not make it worthless.
Not eli5, but a comparison to the Riemann Hypothesis (RH).
RH says the Riemann-zeta function has no zeros along the line (1/2) + iy in the complex plane.
The Lindelof hypothesis says that the number of zeros between (1/2) + iy and (1/2) + i(y+1) is much smaller (little-o) than log(y) as y grows.
So it can be thought of as a weaker version of RH, but still very, very difficult. The fact that Lindelöf has been an open problem for over a hundred years (and is a non-trivial weakening of RH) speaks to how difficult RH is as well.
Like RH, Lindelof implies things about primes, and also (like RH) has lots of implications about lots of interesting prime-like (irreducible) objects in different spaces.
I think you've flipped the condition: the RH says the Riemann zeta function _only_ has zeros along the line 1/2 + iy. (And, indeed, there are known zeros along this line: 1/2 + 14.135... i.)
The Lindelöf hypothesis is, apparently, equivalent to: the number of zeros with real part greater than 1/2+epsilon and imaginary part between y and y+1 is o(log(y)), for any epsilon > 0. That is, boxes of height 1 starting just off the critical line contain few zeros; the RH implies they contain zero.
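For the record, that zero-counting equivalence (Backlund's classical result, stated here from memory and worth double-checking against a reference) reads:

```latex
% Let N(sigma; y) count the zeros rho of zeta with Re(rho) >= sigma
% and y <= Im(rho) <= y + 1. The Lindelöf Hypothesis is equivalent to:
N\!\left(\tfrac{1}{2} + \varepsilon;\, y\right) = o(\log y)
\quad \text{as } y \to \infty,
\qquad \text{for every fixed } \varepsilon > 0 .
% RH is the stronger statement that this count is exactly zero.
```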
I can't point exactly to any particular thing, but I can tell you how to identify them! The magic words are "Assuming the Riemann Hypothesis..."
Any time you see somebody say "We assume RH," or "assuming RH," then any stepping stone to RH makes this assumption more reasonable/likely/anticipated. At this point many working mathematicians will hold opinions like "RH is true" or "RH is true or there's a Siegel zero" so this is kind of like assuming plate tectonics in a seismology paper, or assuming human interference in the atmosphere in a climatology paper.
In cryptography, the only times I can recall having seen it, they meant only "assuming the primes are remarkably well-behaved in their distribution." The primes are empirically remarkably well-behaved, including in the neighborhoods of typical RSA keys. So this is not surprising or unexpected, and improvements on RH should only reinforce our confidence in our empirical techniques.
Nothing, because it doesn't give a method concerning the distribution of prime numbers; it says that if the Riemann hypothesis is true, then the Lindelöf hypothesis is true.
Pretty much all of the relationship to practical stuff in papers like this related to the RH are "it gives us information about the prime numbers, and those are used in cryptography".
It's really just about getting people to perk their ears up rather than true implications about crypto.
I think it's because any information we gain about the Riemann Hypothesis (Lindelöf hypothesis is implied by RH) gives us information about the distribution of prime numbers. Any time you gain information about the distribution of prime numbers you immediately gain information that can be applied to any form of cryptography that makes use of prime numbers. You could use this information either to break existing forms of cryptography faster, or apply it to building newer and stronger cryptography.
We already have a polynomial-time algorithm for primality testing [1]; I'm not sure a proof of the Riemann Hypothesis would affect cryptography that much.
While I agree that a proof of the Riemann hypothesis is unlikely to matter for cryptographic purposes, neither does the AKS primality test. As with many asymptotically efficient algorithms, the constants are simply too large for it to be practical.
No, this is not true. Before this conjecture was proved, you could just assume it was true and see if it would lead anywhere. People already do that with the GRH. If it did have consequences, people would use the resulting algorithm regardless of whether the conjecture was true, because we believe it to be almost certainly true. But even if you didn't believe that, you could see if the algorithm was effective by applying it to real-world instances.
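A concrete instance of this pattern is the deterministic Miller-Rabin primality test: under GRH (via Bach's bound) it suffices to check every base up to 2(ln n)^2, and people happily run it whether or not GRH is proved. A minimal Python sketch (my illustration, not anything from the article):

```python
import math

def is_prime(n):
    """Miller-Rabin primality test, deterministic assuming GRH:
    under GRH it suffices to test every base a up to 2*(ln n)^2
    (Bach's bound). Used in practice regardless of GRH's status."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    limit = min(n - 1, int(2 * math.log(n) ** 2))  # GRH-conditional bound
    for a in range(2, limit + 1):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True
```

If GRH turned out to be false, the test might (in principle) misclassify some composite as prime, but empirically it never has; that is exactly the "assume it and see" attitude described above.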
In the acknowledgments, the only big name from the field is Peter Sarnak (https://en.wikipedia.org/wiki/Peter_Sarnak). If Peter Sarnak vouches for the result, it might be correct.
This is a huge deal, if true. But USC's PR machine seems to have jumped the gun.
The paper in question, found here
https://arxiv.org/pdf/1708.06607.pdf
has so far only been posted to the arXiv (and only eight days ago). It has presumably not been subjected to any sort of peer review yet. No third party other than USC has announced the results. There's no chatter among my mathematician friends, or on the blogosphere.
Fokas's results could be correct. If the community comes to a consensus that they are, this would be a tremendous advance, and the analytic number theory community as a whole will be trumpeting them.
But, for the time being, I suspect that some small technical error is lurking in the details, which will take hours to find, and which will tank the proof.
I hope that I am proven wrong. Until then I propose the headline: "Mathematician-M.D. claims to have solved one of the greatest open problems".