
I took a look at point 3, and that extract from the code check is accurate. Assuming they really did use only one realisation, I was curious why, since it would be unlikely to be an oversight.

Luckily they published their reasoning on the number of realisations in the supplementary materials of a prior paper cited in Report 9 (citation 5): https://www.nature.com/articles/nature04795#MOESM28

"Numbers of realisations & computational resources: It is essential to undertake sufficient realisation to ensure ensemble behaviour of a stochastic is well characterised for any one set of parameter values. For our past work which examined extinction probabilities, this necessitates very large numbers of model realizations being generated. In the current work, only the timing of the initial introduction of virus into a country is potentially highly variable – once case incidence reaches a few hundred cases per day, dynamics are much closer to deterministic."

So it looks like they did consider the issue, and the number of realisations needed depends on the variable of interest in the model. The code check appears to back their justification up: "Small variations (mostly under 5%) in the numbers were observed between Report 9 and our runs."
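
For anyone who wants to see that argument concretely, here is a minimal sketch in plain Python/NumPy. It has nothing to do with the actual Report 9 / CovidSim code and every parameter is made up; it just illustrates why the number of realisations you need depends on which output you care about. The timing of take-off is genuinely stochastic, but once incidence is large, the aggregate totals of a single run sit close to the ensemble mean.

    import numpy as np

    def run_epidemic(seed, n=1_000_000, beta=0.25, gamma=0.1, ifr=0.01, i0=5, days=400):
        """Toy discrete-time chain-binomial SIR (illustrative parameters only).
        Returns (day incidence first exceeds 200 cases/day, total deaths)."""
        rng = np.random.default_rng(seed)
        s, i = n - i0, i0
        takeoff_day, deaths = None, 0
        for day in range(days):
            p_inf = 1 - np.exp(-beta * i / n)        # per-susceptible infection probability
            new_inf = rng.binomial(s, p_inf)
            new_rec = rng.binomial(i, 1 - np.exp(-gamma))
            s, i = s - new_inf, i + new_inf - new_rec
            deaths += rng.binomial(new_rec, ifr)
            if takeoff_day is None and new_inf > 200:
                takeoff_day = day
        return takeoff_day, deaths

    runs = [run_epidemic(seed) for seed in range(20)]
    took_off = [(t, d) for t, d in runs if t is not None]   # drop runs that fizzled out early
    takeoff = np.array([t for t, _ in took_off], dtype=float)
    deaths = np.array([d for _, d in took_off], dtype=float)

    # Take-off day varies by several days between seeds; total deaths vary by only ~1% here.
    print("take-off day : mean %.1f  sd %.1f" % (takeoff.mean(), takeoff.std()))
    print("total deaths : mean %.0f  cv %.2f%%" % (deaths.mean(), 100 * deaths.std() / deaths.mean()))

Whether the same property holds for the exact outputs reported in Report 9 is of course a separate question, but it is the property the quoted supplementary text is relying on.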



The code check's own data tables show that some values varied by 10% or even 25% from those in Report 9. These are not "small variations", nor would it matter even if they were, because it is not OK to present bugs as unimportant measurement noise.

The team's claim that you only need to run it once because the variability was well characterized in the past is also nonsense. They were constantly changing the model. Even if they thought they understood the variance in the output in the past (which they didn't), it was invalidated the moment they changed the model to reflect new data and ideas.

Look, you're trying to justify this without seeming to realize that this is Hacker News, a site read mostly by programmers. This team demanded and got incredibly destructive policies on the back of this model, which is garbage. It's the sort of code quality that got Toyota found liable in court for severe negligence. The fact that academics apparently struggle to understand how serious this is does far more to fuel anti-science narratives than anything any blogger could ever write.


I looked at the code check. The one 25% difference is in an intermediate variable (peak beds). The two differences of 10% are 39k deaths vs 43k deaths, and 100k deaths vs 110k deaths. The other differences are less than 5%. I can see why the author of the code check would reach the conclusion he did.
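
To spell out the arithmetic behind those percentages (nothing here beyond the numbers quoted above): (43k - 39k) / 39k is roughly 10%, and (110k - 100k) / 100k is exactly 10%, while the single 25% difference sits in the intermediate peak-beds figure rather than in a headline death toll.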

I have given a possible explanation for the variation, one that doesn't require buggy code, in my previous comments.

An alternative hypothesis is that it's bug-driven, but very competent people (including eminent programmers like John Carmack) seem to have vouched for it on that front. I'd say this puts a high burden of proof on detractors.

https://www.nature.com/articles/d41586-020-01685-y



