Also, a lot of scientific computing is very much "it works or it doesn't", and once it works it's a reification of some fundamental mathematical algorithm: a black box that should never need to be opened again.
Which is terrible science. Job #1 of good science is reproducibility. Crapping out black boxes and claiming "I've proven my theory" in a way no one else can analyse or reproduce undermines the fundamentals of the scientific method.
I would argue that having two people independently crap out black boxes and comparing them is far more scientific than having one open box that's never reproduced.
A former co-worker of mine was having trouble understanding the results of her experiment. The simulation software she was using had been the gold-standard implementation for over a decade. The code was clear, well documented, and well engineered. However, my co-worker decided to re-invent the wheel and write her own. The results of her code exactly matched the results of her experiment. Thus, she designed a new experiment and predicted the results with the standard code and her own. After performing that experiment, her simulation was vindicated. It eventually came out that the standard code made assumptions that were invalid in a huge portion of the phase space.
It's important, as a scientist, to be able to perform the same experiment twice and get the same result. However, it's far more important to perform two different experiments and get the same result. Measuring my body temperature a hundred times with the same thermometer isn't nearly as useful as measuring it twice with two different thermometers. Having one piece of code that runs on a hundred different computers, giving the same result every time, isn't as useful as having two different, independent code bases.
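To make the "two thermometers" point concrete in code, here's a minimal sketch of my own (the functions and tolerance are illustrative, not from anyone's actual code base): estimate the same quantity with two mathematically independent methods and check that they agree.

```python
import math
import random

def integrate_trapezoid(f, a, b, n=100_000):
    """Trapezoidal rule: one 'thermometer'."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

def integrate_monte_carlo(f, a, b, n=200_000, seed=0):
    """Monte Carlo estimate: a mathematically independent 'thermometer'."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Agreement between independent methods is the meaningful check;
# running one method a hundred times only tests determinism.
t = integrate_trapezoid(math.sin, 0.0, math.pi)
m = integrate_monte_carlo(math.sin, 0.0, math.pi)
assert abs(t - m) < 0.02, (t, m)
print(t, m)  # both close to the exact value, 2.0
```

If the two disagree, at least one of them is wrong, and you've learned something; if a single method agrees with itself, you've learned almost nothing.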
I do my best to make my code maintainable. I have everything up on GitHub. I'm constantly trying to improve the documentation. However, if my code is still being used ten years from now, we have failed as scientists. What should happen is that a new code base should be written that does the same things that my code claims to do. If we get the same results, then great. If we don't, then we find out why.
But that's not happening. There are no plans for an independent re-implementation. Everyone keeps using my code, because it's clear and it "works". If my code were less maintainable, that re-implementation would eventually occur and they would be able to check my results. Only then would we truly know if my code works or if it just "works". I'm not going to do that, but I'd understand the reasoning behind it.
I'm referring to cases where you are, say, porting or adapting a well-understood algorithm. Once the algorithm is at parity in terms of inputs and outputs, the implementation details are relatively irrelevant from that point forward.
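As a sketch of what checking that parity might look like in practice (the harness and the toy example are mine, not from the comment): run the reference implementation and the port over a shared battery of inputs and collect any disagreements.

```python
import math

def check_parity(reference, port, inputs, rel_tol=1e-12):
    """Run two implementations over the same inputs; return disagreements."""
    mismatches = []
    for x in inputs:
        r, p = reference(x), port(x)
        if not math.isclose(r, p, rel_tol=rel_tol):
            mismatches.append((x, r, p))
    return mismatches

# Example: a naive "port" of math.hypot, checked against the stdlib version.
naive_hypot = lambda xy: (xy[0] * xy[0] + xy[1] * xy[1]) ** 0.5
cases = [(3.0, 4.0), (0.0, 2.5), (1e200, 1e200)]
bad = check_parity(lambda xy: math.hypot(*xy), naive_hypot, cases)
print(bad or "implementations agree on all test inputs")
```

Here the diff catches the naive port overflowing to infinity on large inputs where the reference does not, which is exactly the kind of invalid hidden assumption the gold-standard-code story above describes.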