Good design speaks to credibility, with some justification. If somebody doesn't care whether their webpages look good, it suggests they may not care whether their content is accurate either.
Fibre-optic endoscopes have been around since the mid-1970s, so the technology is much older than the muon-scanning techniques the ScanPyramids project used to discover the North Face Corridor. The real problem, I think, was that Hawass went on record rejecting the ScanPyramids results, for reasons known only to Zahi Hawass, despite widespread scientific acceptance of the ScanPyramids papers.
The alternative view is that it's run by an incompetent narcissist (Zahi Hawass, head of the Egyptian Antiquities Department that controls who can do what in the pyramids). Hawass was not particularly supportive of the original ScanPyramids Project results.
'On November 2, 2017, the Egyptologist Zahi Hawass told the New York Times: "They found nothing... This paper offers nothing to Egyptology. Zero."' -- Wikipedia.
The result from the ScanPyramids Project that got the most coverage was a strong suggestion that there is a major void (the ScanPyramids Big Void) above the Grand Gallery leading to the King's Chamber (which has not yet been confirmed). They also found weaker evidence that suggested there was a tunnel on the north face behind the chevron blocks. The existence of the ScanPyramids North Face Corridor (referred to in the literature as the SP-NFC) was confirmed in 2023, by inserting an endoscope into a crack between two of the chevron blocks. Zahi Hawass did his best to take credit for the discovery. Nobody took him seriously.
That's a different Nature paper, from last year. The article regurgitates things that happened in 2023 under a 2026 byline date.
Your paper, on the other hand, seems to compare three different scanning methods (radar, ultrasound, and resistivity) for measuring the thickness of the chevron block in front of the North Face Corridor. So it's not related.
Interestingly, here in Canada, kids are no longer taught long division. When I was in high school, we were taught to use slide rules, but not very seriously. And I would imagine nobody is taught how to use trigonometry tables anymore (at least I hope not). So these days you learn arithmetic very differently, because calculators exist.
The point is that if you know the algorithm will produce X as output when the input is Y, give that to Claude as a tool.
And if you know that the previous algorithm completes in Z milliseconds, tell Claude that too and give it a tool (a command it can run) to benchmark its implementation.
This way you don't need to tell it what it did wrong, it'll check itself.
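A minimal sketch of the kind of self-check tool the parent is describing. Everything here is invented for illustration: the known input/output pair, the time budget, and the use of the builtin `sorted` as a stand-in candidate implementation.

```python
import time

# Hypothetical reference data you already trust (names are made up):
KNOWN_INPUT = [5, 3, 8, 1]       # Y: an input whose answer you know
EXPECTED_OUTPUT = [1, 3, 5, 8]   # X: what the trusted algorithm produces
BUDGET_MS = 10.0                 # Z: how long the previous algorithm took

def check(candidate):
    """Run a candidate implementation against the known answer and budget."""
    start = time.perf_counter()
    result = candidate(list(KNOWN_INPUT))
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    ok = (result == EXPECTED_OUTPUT)
    return ok, elapsed_ms

# Using the builtin as a stand-in for "Claude's implementation":
ok, ms = check(sorted)
print(f"correct={ok} time={ms:.3f}ms budget={BUDGET_MS}ms")
```

Handed a script like this, the model can iterate against `check` on its own instead of waiting for you to spot the bug.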
It was the other way around. Claude gave me an algorithm. I found it fishy. So I specifically constructed a counterexample in response to Claude’s algorithm.
Of course, when I gave that to Claude, Claude changed the algorithm. But if I didn’t have enough experience and CS fundamentals to find it fishy in the first place, how would I have constructed a counterexample at all?
I ran this down, because I have a particular interest in vectorizable function approximations. Particularly those that exploit bit-banging to handle range normalization. (Anyone have a good reference for that?)
Regrettably, this is NOT from Hastings 1955. Hastings provides Taylor series and Chebyshev polynomial approximations. The OP's solution is a Pade approximation, and Pade approximations are not covered at all in Hastings.
When you say "this is NOT from Hastings," I had to double-check my post. I take you to mean that the Pade approximation is not from Hastings, which appears to be true. But the polynomial approximation the OP referenced, from Nvidia via A&S and ultimately from Hastings, definitely is in Hastings, on page 159. It's interesting that in the article the OP tried a Taylor expansion and a Pade approximation, but not the fairly standard "welp, let's just fit an Nth-order polynomial to the arcsin," which is what Hastings did back in the day.
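For the curious, a rough sketch of that Hastings-style fit. I'm using a plain least-squares fit rather than Hastings's minimax (equioscillating) fit, and the A&S/Hastings form `arcsin(x) ≈ pi/2 - sqrt(1-x)*p(x)` with a low-degree polynomial `p`; the `sqrt(1-x)` factor absorbs the infinite derivative of arcsin at x = 1, which a bare polynomial can't follow. The degree and sample grid are arbitrary choices, not Hastings's.

```python
import numpy as np

# Fit p(x) so that arcsin(x) ~= pi/2 - sqrt(1 - x) * p(x) on [0, 1).
# Stop just short of x = 1 to avoid a 0/0 in the target ratio.
x = np.linspace(0.0, 0.999, 4000)
target = (np.pi / 2 - np.arcsin(x)) / np.sqrt(1.0 - x)
coeffs = np.polyfit(x, target, 3)  # degree-3 p(x), least squares

def asin_approx(t):
    return np.pi / 2 - np.sqrt(1.0 - t) * np.polyval(coeffs, t)

err = np.max(np.abs(asin_approx(x) - np.arcsin(x)))
print(f"max abs error on [0, 0.999]: {err:.2e}")
```

Even this lazy least-squares version lands within roughly the same error ballpark as the published degree-3 coefficients; a proper minimax fit (what Hastings actually computed) tightens it further.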
It doesn't seem to be terribly up to date, though. It uses almost exclusively Taylor series, and seems completely uninterested in error analysis of any kind. Unless I'm missing something.
It's a general-purpose reference for mathematicians, not specifically for numerical analysis. Mathematicians are usually interested in the boring old power series centered at zero (Maclaurin series), so that's what gets prominence.