
> You can define growth rate of a plant this way: define a standardized lab environment [...]

If the standardized lab environment is dry and sunny, then you'll conclude that a cactus grows faster than a fern, since the ferns will mostly shrivel up and die. If the standardized lab environment is moist and shaded, then you'll conclude that the fern grows faster, since cactuses will mostly die for lack of sun. So which is right?

The concept that you're looking for simply doesn't exist--the growth rate of an organism can't be defined except with reference to its environment, which for a virus that infects humans includes human behavior. (What rate of condom use should the standardized lab environment for HIV correspond to? How will you model the increased popularity of fentanyl?)

You are looking for CS-level rigor and simplicity in biology, but biology doesn't work like that. You are correct that many biological results were oversold to the public in this way during the pandemic; but you're once again criticizing those public-facing oversimplifications, not the science as a practitioner would understand it.



Neither can be said to be right without reference to a fixed context. If your field standardizes on a lab setup that's drier and sunnier than what ferns like, you'd indeed conclude that ferns grow slower in that context and that's OK because the growth rate is at least well defined. If there's a use case for comparable growth rates in different contexts, OK, define separate names for those rates and measure them separately. Or try to isolate the effect of heat and light such that the growth rate of any plant can be computed from the equivalent of e=mc^2.
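To make the point concrete, here's a toy sketch (all numbers invented for illustration) of how the "which grows faster" answer flips depending on which standardized context you fix:

```python
# Hypothetical illustration: the "growth rate" ranking of two plants
# depends entirely on which standardized environment you fix.
# All rate values below are made up for the sake of the example.

# Per-day exponential growth rates (1/day) measured in two lab contexts.
rates = {
    "dry_sunny":    {"cactus": 0.020, "fern": -0.050},   # fern shrivels
    "moist_shaded": {"cactus": -0.010, "fern": 0.060},   # cactus fails
}

def faster_grower(environment: str) -> str:
    """Return whichever plant grows faster in the given fixed context."""
    env = rates[environment]
    return max(env, key=env.get)

print(faster_grower("dry_sunny"))     # -> cactus
print(faster_grower("moist_shaded"))  # -> fern
```

Each rate is perfectly well defined once the environment is pinned down; it's only the context-free question "which plant grows faster?" that has no answer.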

Likewise, you clearly wouldn't try to measure the infectiousness of HIV in people. If you want to measure the relative "infectiousness" of viruses in humans using precise numbers, then you'd need a controlled experimental environment, presumably something in vitro. That would miss a lot of factors that are important if you're trying to predict epidemics at the society-wide level, but OK, so be it. You need a firm footing in the basics before you can progress to more complex scenarios.

I don't really agree that it's unreasonable to expect CS-level rigor in biology. Microbiologists seem to manage it? It's expected that if two labs sequence the same organism they can in principle get the same DNA sequence, and if they do X-ray crystallography on the same protein they'll derive the same structure. So we're not even comparing biology and CS here, we're comparing microbiology with epidemiology. The latter seems far closer to a social science in terms of its methods and rigor.

To be clear, it's also fine to do epidemiology using less rigorous methods, the way it mostly used to be done. When I read papers from the 80s, they seemed much better matched to the actual data quality: largely prose-oriented, very limited use of maths, presenting falsifiable hypotheses whilst admitting to the big unknowns. That's fine; science doesn't always have to be precisely quantifiable, especially at the margins of what's known. But if scientists do precisely quantify things, then those quantities should be well defined.


> That would miss a lot of factors that are important if you're trying to predict epidemics at the society-wide level, but OK, so be it. You need a firm footing of the basics before you can progress to more complex scenarios.

That's how EE/CS stuff usually works (at least outside ML), building complex systems hierarchically out of well-understood primitives. The life sciences are different. There's almost nothing there we understand well enough to build like that, so almost all results of practical importance (a novel antibiotic, a vaccine, a cultivar of wheat, etc.) are produced by experiment and iteration on the complete system of interest, guided to some extent by our limited theoretical understanding.

This discrepancy has been noted many times; it's just a completely different way of working and thinking. If you haven't already, you might read "Can a biologist fix a radio?".

> Microbiologists seem to manage it?

A grad student in microbiology can grow millions of test organisms in a few days, at the cost of a few dollars, and get all the usual benefits of the central limit theorem. A grad student in epidemiology absolutely can't, since their test organisms are necessarily people. So you're quite correct that it's basically a social science, since it depends on aggregate human behavior in the same way e.g. that economics does, and is therefore just as dismal. Unfortunately it's also the best and only science capable of answering questions of significant practical importance, like whether the hospitals are about to be overrun. I'd tend to agree that stuff like Imperial College's CovidSim has so many parameters and so little ground truth as to have almost no predictive value. R0 seems fine to me though, and usefully well-defined, in the same way that the CAGR of a country's GDP seems fine.
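The analogy between R0 and CAGR can be made concrete: both are summary growth rates over a fixed time base, useful despite abstracting away enormous underlying complexity. A toy sketch, with all figures invented:

```python
# Toy sketch of the analogy: CAGR and the reproduction number are both
# summary growth rates over a fixed time base. All figures are invented.

def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two GDP readings."""
    return (end_value / start_value) ** (1 / years) - 1

def crude_r(cases_now: float, cases_one_generation_ago: float) -> float:
    """A crude reproduction-number estimate: the ratio of new case
    counts taken one mean generation interval apart."""
    return cases_now / cases_one_generation_ago

# GDP grows from 1000 to 1100 over two years: ~4.9% per year.
print(f"{cagr(1000.0, 1100.0, 2.0):.4f}")  # -> 0.0488

# Cases grow from 120 to 300 over one generation interval: R ~ 2.5.
print(crude_r(300.0, 120.0))               # -> 2.5
```

In both cases the single number hides heterogeneity (sectoral differences, superspreading), but it's well defined and answers a practical question.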

In the life sciences, it's often possible to design an experiment under artificial conditions that will get a repeatable answer, like the growth rate of a plant in a certain controlled environment. It's much more difficult to use the result of such a repeatable experiment for any practical purpose; consider, for example, the steep falloff in drug candidates as they move from in vitro screens (cheap and repeatable, but only weakly predictive) to human trials (predictive by definition, but expensive and noisy). I'm absolutely not a life scientist myself, in part because I think I'd find that maddening; but essentially all results of practical benefit there came from researchers working in that way.



