No, I'm not confused about any of this. I'm giving the textbook definition in this thread, right? I'm not even sure why you are arguing with me instead of the user who started the thread by giving specific R0 numbers for influenza and SARS-CoV-2. As you say, you can't do that because R0s are characteristics of outbreaks, not viruses.
So the confusion is the other way around. An outsider to the field would expect R0 to be defined biologically, given that it's called the "basic reproduction number" and because epidemiologists themselves regularly make claims like "influenza has an R0 of this and measles has an R0 of that", but it takes all of five minutes to discover that the way it's calculated can't support such statements.
That's why, as presently defined, it's useless. If a value can't be compared with any other value, what is it for? Put another way, claims about R0 aren't falsifiable.
Epidemiology needs to develop far more robust methods that aren't just applying R-the-software to random datasets scraped from the web if it wants to be taken seriously as a field. There are very basic philosophy of science issues here. Argument by textbook gets us nowhere, because the textbooks are themselves written by people engaged in unscientific practices. The expectation by outsiders is reasonable, the actual way things operate isn't.
Therefore, my suggestion - meant constructively! - is to rebase the field on top of microbiological theory. Scrap the models for now. Delete "and everything else" from the R0 definition and come up with an algorithm to compute a measure of infectiousness from DNA/RNA or lab experiments only. Once you've got a base definition that lets different labs replicate each other's numbers, you can start to incorporate other aspects (under new variable names) like immune system strength, population density grids, etc.
Such definitions won't let you give governments predictions of hospital bed demand right away, and indeed may not let you calculate much of real-world value for a while, but the field isn't able to do that successfully today anyway. COVID models were all far off from reality. What it would do, though, is put epidemiology on track to one day deliver accurate results based on a firm theoretical foundation.
The suggestion that SARS-CoV-2 spreads faster than influenza because two studies on different populations at different times found R0 of 2.5 vs. 1.7 respectively is indeed false--that difference is obviously within the expected spread from different environments. I thought the top reply to that comment (quoting Wikipedia) clearly implied that, so I didn't think any further effort there was required. You posted other statements that were false in different ways, so I responded to those.
That said, SARS-CoV-2 really does spread faster than influenza; among other reasons, we know this because influenza cases went almost to zero during the pandemic, implying that the same behavior in the same population that clearly resulted in R0 > 1 for SARS-CoV-2 resulted in R0 < 1 for influenza. That's a relatively trivial and obvious claim, but it's falsifiable and it involves R0.
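The inequality behind that argument can be spelled out in a few lines. All numbers below are made up for illustration; the only substantive assumption is the one stated above, that pandemic-era behavior multiplied transmission by the same factor for both viruses:

```python
# Hypothetical numbers; the assumption is that pandemic behavior
# multiplies transmission by the same factor c for both viruses.
c = 0.5               # shared contact/mitigation factor
R_eff_covid = 1.5     # observed: kept spreading   => c * R0_covid > 1
R_eff_flu = 0.4       # observed: nearly vanished  => c * R0_flu  < 1

R0_covid = R_eff_covid / c  # implied basic numbers under the shared c
R0_flu = R_eff_flu / c

assert R0_flu < 1 / c < R0_covid  # hence R0_flu < R0_covid
```

Whatever the actual value of c was, the two observed outcomes bracket 1/c from opposite sides, which is what forces R0_flu < R0_covid.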
It's pretty common to reduce a time series to a single number. For example, in economics, it's common to look at a compound growth rate per year, averaged over the period of interest. Likewise, in epidemiology, it's common to look at a compound growth rate per estimated serial interval, averaged over the outbreak and corrected for immunity acquired during the outbreak. That's R0, with all the convenience and all the flaws of any other simple aggregate statistic.
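To make that concrete, here's a toy sketch of that aggregate. The function name and case counts are invented for illustration, and it skips the immunity correction mentioned above:

```python
def r0_from_growth(daily_cases, serial_interval_days):
    """Compound growth factor per serial interval, from early-phase
    daily case counts (toy version: no correction for immunity
    acquired during the outbreak)."""
    days = len(daily_cases) - 1
    # Geometric-mean daily growth over the observation window
    daily_growth = (daily_cases[-1] / daily_cases[0]) ** (1 / days)
    return daily_growth ** serial_interval_days

# Toy data: cases doubling every 3 days, serial interval ~5 days
cases = [100 * 2 ** (d / 3) for d in range(10)]
print(round(r0_from_growth(cases, 5), 2))  # ~3.17, i.e. 2**(5/3)
```

Everything environment-dependent (contact rates, season, interventions) is baked into that growth rate, which is exactly why the resulting number describes the outbreak rather than the virus alone.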
> Therefore, my suggestion - meant constructively! - is to rebase the field on top of microbiological theory. Scrap the models for now. Delete "and everything else" from the R0 definition and come up with an algorithm to compute a measure of infectiousness from DNA/RNA or lab experiments only.
I hope you realize that biologists aren't all stupid? If they could somehow define "infectiousness of the pathogen alone, without environmental factors", then that would be incredibly useful, removing all the factors that complicate comparisons of R0. The fact that they've made no attempt to do so should be a clue that the concept that you're wishing for simply doesn't exist.
They do study growth rates in cell culture, or the amount of virus exhaled by a sick lab animal, or the amount of virus that a healthy lab animal must inhale to get infected with some probability. Those are well-defined and somewhat repeatable lab measurements, but they're not very predictive of spread in actual humans. Computational methods are even less predictive; the idea of calculating infectiousness in humans from a viral genome is mostly science fiction for now. They're trying, but this may be harder than you think.
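For the infectious-dose experiment, one common way to summarize it is the single-hit exponential dose-response model. A hedged sketch with made-up parameters (`p_infect`, `id50`, and `k` are illustrative names, not a standard API, and the numbers are not real measurements):

```python
import math

def p_infect(dose, k):
    # Single-hit model: each inhaled virion independently causes
    # infection with probability 1/k, so for large k
    # P(infection) ~= 1 - exp(-dose / k).
    return 1 - math.exp(-dose / k)

def id50(k):
    # Dose infecting half of exposed animals: solve 1 - exp(-d/k) = 0.5
    return k * math.log(2)

k = 100.0  # hypothetical scale parameter fitted to lab data
assert abs(p_infect(id50(k), k) - 0.5) < 1e-12
```

The k fitted in a lab animal is well-defined and repeatable, but nothing in it tells you how many virions a human inhales at a dinner party, which is the gap between these measurements and R0.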
> we know this because influenza cases went almost to zero during the pandemic, implying that the same behavior in the same population
No, we don't know this; what you said here is just an assumption. There's a competing hypothesis (viral interference) which seems to explain the data better.
The way epidemiology currently works just cannot tell us which virus is more infectious. I agree that a rebase onto microbiology would be hard and might fail, but the current approach has already failed. Doing difficult research that might not pan out is how they justify grant funding in the first place.
> There's a competing hypothesis (viral interference) which seems to explain the data better.
How would that explain why SARS-CoV-2 suppressed influenza, instead of influenza suppressing SARS-CoV-2? If viral interference occurs (which I agree it may), then the two simultaneous pandemics are coupled; but you'd still expect the virus with higher R0 to win.