Not just R&D. This is how manufacturing works in general. Nobody is like "here's the CAD, go ahead and send me 50,000 units! I'm sure it will be fine."
This says something about the culture of hubris that gets so easily elevated to leadership positions in American tech. It's nice to see one of these fail-bros get taken down.
But two points remain.
1. Had they been able to sustain this Ponzi scheme to IPO, he would be celebrated as a tech-god, as opposed to facing hard time.
2. The VCs aren't required to disclose their sales of crypto. My guess is they made money off this clown, directly at the expense of all the FTX users he fleeced. As long as that keeps happening, this sort of thing will never stop.
I've always felt this was probably the case. Take an idiot that needs to feel important, tell him he's a "boy genius" who can do no wrong and is always going to be misunderstood, and you basically have the perfect patsy. Till the end SBF just could not let go of that idea that he was smarter than everyone else, never realizing that this was always just a marketing gimmick.
At least Caroline Ellison had the sense to realize that she was living a delusion and quickly did what she could to walk away with minimal harm done to herself. In a few years she'll probably find herself making book deals and getting paid outrageous consulting fees once memory fades enough.
Take an idiot that needs to feel important, tell him he's a "boy genius"
This points to something I've always wondered about SBF: why did anybody think he was a genius?
I know very specific reasons people think Musk, Jobs, Gates, Zuck, and others are geniuses even if I don't agree in every case.
But... SBF? What did he even allegedly do (or preside over) that seemed smart or unique? It was... a crypto exchange. They had their own coin. Uh. Those are not unique things. I don't get it.
But your comment made me think. Maybe it was just an empty bubble of perception. VCs pretended he was a genius so they could use him, and told everybody he was a genius, so everybody thought he was a genius even though nobody could point to why they thought that way.
The crypto crowd needed a genius, and he kinda fit the image, so they went with it. It’s not that different from any other political movement; leaders are made by others, not by themselves. It’s natural for us to focus on the person as if they’re the one getting all the benefits, but all the crypto scammers benefitted from having an “altruistic crypto king” as a media figure.
I know a few MIT grads and they're plenty smart but nobody is calling them geniuses and throwing billions of VC at them, so I'm wondering what was different about SBF.
He had a pedigree, I guess, and made some... money doing the arbitrage thing. But also a lot of people do that? There seems to be no real intrinsic reason why this guy would be seen as a star.
I guess it was just kind of a "right guy right place right time" thing.
Totally agree. The ferocity of hustling and always-be-selling on the one hand, and on the other the reflexive need to hyperbolically market or brand someone as an exceptional category X (with the implication that the bestower also has brilliant insight), has gotten stupid. When the currency becomes BS you can bet tomorrow can only bring hyper-BS.
I'm reminded of the Johnny Carson line after dealing with crazy entertainment types: I'd give my bottom dollar for a regular person right now.
I'm still reading the fawning biography Going Infinite, and even Michael Lewis thinks there was a lot of "right guy right place right time" involved. Heck, even SBF thought so. Lewis' origin story is that SBF noticed how efficient Jane Street was and how inefficient crypto markets were and assumed that somebody would inevitably bring greater efficiency to crypto, so he thought, "Hey, why can't that somebody be me!?"
Yeah even if one thinks negatively of Jobs, comparing SBF to young Jobs is a joke.
SBF founded FTX in 2019 when he was ~27 years old. At that point he had gone to MIT and done some arbitrage stuff. More than I've accomplished, but I mean, I'm not really seeing anything that ticketed this guy for stardom.
By age 27, Jobs had founded Apple Computer and was one of the major players in launching the whole personal computing industry.
I mean, even if we feel negatively about Jobs, we can also point to a pretty wide range of accomplishments he helmed, even outside of Apple. He was one of the founders of Pixar, purchasing it from Lucasfilm. He was also famous for a certain brand of charisma, or at least persuasiveness. One can not think very highly of him but still see why investors would be like, let's give this guy some money, perhaps lots of money.
I don't think highly of Elon Musk but I have zero difficulty understanding why people would think he's a genius sort of guy and give him money. Even I have to admit he's been at the helm for some wild achievements. Also certainly a charismatic guy in some ways. Not going to have a shortage of money thrown at his next venture.
But SBF?
Again... what was anybody seeing in this guy? Must have just been "right place right time" when the crypto hype wave was cresting.
It’s also that he was naive enough to put himself out there as a figurehead of sorts for the crypto movement. So the others were probably like “ok if you insist…”
He was also surfing high on his messiah complex with a kind of 2020’s version of the white man’s burden, which the public was enamoured with.
I meant 2020’s as a decade, and the concept is Effective Altruism, the idea that a bunch of geeks sitting in gaming chairs behind computer screens know better what to do with charity money than the people actually doing the work in the real world.
> In a few years she'll probably find herself making book deals and getting paid outrageous consulting fees
Like Jordan Belfort of Wolf of Wall Street infamy, or Andy Fastow of Enron. Why shouldn't she get in on the action? American publishers love a comeback story.
To what extent were SBF and Ellison tech though? This trend of labelling more or less any new company that attracts VC funding as tech is bothersome. It's the mental loophole that allowed Adam Neumann to embezzle huge amounts of investor money too (but he did it "right" so gets away with it), by claiming that WeWork wasn't just an office leasing company but a tech platform. None of these people are engineers of any kind, none of them have developed any kind of new tech. They are just ordinary business types who run a business that happens to employ a few coders.
Crypto is explicitly tech, and a crypto exchange lies at the intersection of finance and tech. It has never been a requirement that leadership of these companies be engineers (much to the dismay of any engineer hired at some of these companies).
It is fair to call FTX and WeWork tech companies because they built platforms, powered by the internet and apps to modernize legacy offline industries. That's been the definition of a tech company since the 90's dotcom boom.
What platform did FTX create? Were there apps you could install into FTX?
FTX was just an ordinary financial company, AFAICT. It had a bit of software to run the order book, but wasn't run by technical people, had no R&D effort and no obvious reason to develop one, and so on.
WeWork was just a company like any other, I can't see a single factor that could make this be a tech company.
To me, at heart, a tech company must develop some new kind of tech and do so in-house.
This is why I think it's strange that Coinbase is considered a crypto company. It is/was a Ruby on Rails web app. Almost like calling the CME Group an agriculture company.
Tesla is a tech company, not a car manufacturer -> common reasoning for Tesla market cap
You already mentioned WeWork. FTX is another example. Given how few "tech" start-ups and big tech companies actually produce technology as their core product, I guess the sentiment of SBF and FTX being somewhat symptomatic is fair.
Even without touching customer funds the FTT token thing seems quite wrong to me.
And based on the infamous Matt Levine interview, it was at best a legit Ponzi exchange, according to what Sam himself says about the things being traded there.
Well yeah, the FTT token thing was a bit iffy, if not outright a Ponzi scheme. Exchange tokens are still around - you can still get the Binance token or KuCoin token if you like, and they tend to work as a sort of equity, benefiting from the exchange's income. But not exactly equity, and kind of subject to manipulation and the like. Where FTX sinned was treating its FTT tokens as an asset on the balance sheet worth billions when they were not worth that.
The famous Matt Levine interview was about yield farming, which as far as I know SBF didn't actually do but which he provided a striking explanation of. While that is Ponzi-like, it's a bit of a disservice to call it a Ponzi, as it's actually rather more cunning and was the basis of most of the 'distributed finance' boom and bust.
It's wild to think of what would have happened if Alameda had gotten out of crypto and instead put all the stolen customer money in an index fund. The CFTC complaint says they started misusing customer funds in 2019. Stocks have increased a lot since then. FTX would have been able to meet any customer request for withdrawal, Alameda would have presented staggeringly large returns (trading profit / legally invested money), and SBF would still live like a king.
I'm somewhat out of the loop on this whole story -- can you explain how the IPO would have helped? What was his exit plan? My understanding of Ponzi schemes is that they always collapse eventually because everyone will want their money back sooner or later.
a big infusion of cash from the IPO might have been able to cover up the deficits, and the inevitable "cleaning things up to do the IPO" work could have bought them time to un-fuck a lot of the riskiest accounts.
then the IPO hits, you cash out, take your exit, and drink your mint julep or mojito in Bermuda while the whole thing burns down 2 years later. whole lotta "not my problem" at that point, and everyone would think you're a genius.
I'm not seeing how threatening to sue the anti-defamation league is in and of itself antisemitic? Or did he say something else? I think I remember he said something about Soros once, and Soros conspiracies seem way more antisemitic than saying you want to sue the ADL...
I'm not seeing how threatening to sue the anti-defamation league is in and of itself antisemitic?
The suing? Nah, that's probably not it. But blaming "the Jews" because your advertising revenue is down? Eh, it gets a little more questionable at that point. In isolation, maybe not. But come...on...given past behavior and everything else going on with xitter, you're willing to give the benefit of the doubt? I'm not. My ears might not be what they used to be (thanks, rock n' roll!), but I can still hear dog whistles.
For the mathematically-minded, it is worth noting that the Asian flu had an R naught of 1.7 while Covid had an R naught of a bit more than 2.
Keep in mind these are fat-tailed distributions. So an R naught of 2 is an estimate of a true R naught, and because using sample averages to estimate means (the true R naught is a mean) is unstable (the fatter the tail, the fewer moments the distribution has), once the reported number starts getting upwards of 2, the less confidence you can have that the reported number represents the true R naught. You have no idea if the superspreaders are infecting 50 or 500 people.
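To make that concrete, here's a minimal sketch of the instability (the Pareto offspring distribution and its shape parameter are purely illustrative assumptions, not fitted to any real pathogen):

```python
import numpy as np

rng = np.random.default_rng(42)

# Secondary infections per case drawn from a fat-tailed distribution.
# For shape alpha = 1.2 the mean exists (it is 1/(alpha-1) = 5) but the
# variance is infinite, so sample averages converge slowly and erratically.
for trial in range(5):
    secondary = rng.pareto(1.2, size=10_000)
    print(f"trial {trial}: estimated mean R = {secondary.mean():.2f}")
```

Across runs the estimates scatter widely around 5: a single superspreader can swamp the average, which is exactly why a reported number near 2 tells you little about the tail.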
At the beginning of the pandemic, we were seeing R naught estimates of 2.5. That's shockingly high. People were right to take extreme measures in the face of that kind of uncertainty.
These people enforcing their views with 20-20 hindsight, when the decisions were made given limited data at the time, are causing harm to our collective ability to make decisions in emergencies.
For those interested in the nuances of modelling infections, I'm going to leave a Wikipedia extract here.
"R_{0} is not a biological constant for a pathogen as it is also affected by other factors such as environmental conditions and the behaviour of the infected population.
R_{0} values are usually estimated from mathematical models, and the estimated values are dependent on the model used and values of other parameters. Thus values given in the literature only make sense in the given context and it is recommended not to use obsolete values or compare values based on different models.
R_{0} does not by itself give an estimate of how fast an infection spreads in the population".
> it is recommended not to ... compare values based on different models
i.e. R0 numbers are worthless. You can't compare these values between pathogens, you can't even compare them between different academic groups studying the same pathogen or between outbreaks of the same pathogen as modeled by different versions of the same software. There's no agreed way to measure this value empirically.
Instead they write a program to simulate the spread of a virus through a population and then (in effect) brute force one or more of the inputs until the results of the simulation match what happened so far. These variables are often given names that sorta reflect an intuition about what they're supposed to do, but they're just names. The actual values are arbitrary and there's no effort to ensure the computed values connect to the names in any plausible way.
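The workflow looks something like this toy sketch (a discrete SIR-style simulation with synthetic "observed" data standing in for a scraped dataset; every number is hypothetical):

```python
import numpy as np

def simulate_daily_cases(beta, gamma=0.2, N=1_000_000, I0=10, days=60):
    # Toy discrete SIR dynamics: susceptible and infectious pools only.
    S, I = float(N - I0), float(I0)
    daily = []
    for _ in range(days):
        new = beta * S * I / N       # new infections today
        S -= new
        I += new - gamma * I         # recoveries leave the infectious pool
        daily.append(new)
    return np.array(daily)

observed = simulate_daily_cases(beta=0.5)  # pretend this came off the web

# Brute-force the input until the simulation matches "what happened so far".
betas = np.linspace(0.1, 1.0, 91)
errors = [np.sum((simulate_daily_cases(b) - observed) ** 2) for b in betas]
best = betas[np.argmin(errors)]
print(f"fitted beta = {best:.2f}, implied R0 = beta/gamma = {best / 0.2:.1f}")
```

The fit looks precise, but nothing about the procedure guarantees that "beta" means what its name suggests once the model structure changes.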
A funny example of this was a COVID model from University College London that used household size as a free variable. The model used this value to achieve curve fit for each country in the study, so they reported that the average Brit lives with seven other people (the paper has 73 citations).
More seriously, estimates of R0 for SARS-CoV-2 have ranged from 1.5 to nearly 6, maybe even higher. It's not even clear what the definition is because it's not time bounded in any way. So not a value that's really knowable, like the speed of light is.
It would be useful to be able to measure some sort of inherent infectiousness, but R0 as actually defined and computed today is useless. It looks scientific on the surface but everyone who calculates it gets a totally different result, and because there's no actual underlying theory involved (just playing with stats), there's no way to decide which estimate is correct.
To get meaningfully constant values for infectiousness you'd probably need a microbiological / genetic theory that let you derive it from RNA sequences.
> To get meaningfully constant values for infectiousness you'd probably need a theory that let you derive it from RNA sequences.
As others have already noted in this thread, the rate at which a disease spreads inherently depends on both the biology of the pathogen and human behavior. For example, if everyone goes to crowded nightclubs, then R0 for an airborne respiratory disease goes up. If everyone stays home, then it goes down. The biology of the pathogen and human behavior interact in complex, multidimensional ways; promoting condoms will reduce R0 for HIV, but not for influenza. The disease spreads only through human behavior, so the concept of infectiousness independent of that simply doesn't exist.
You are correct that R0 is determined by curve-fitting to actual case counts. It couldn't be otherwise though, since that's the only way to capture the actual human behavior. We expect different R0 in different situations, so the range you see for SARS-CoV-2 isn't a surprise. This is textbook stuff:
> Any factor having the potential to influence the contact rate, including population density (e.g., rural vs. urban), social organization (e.g., integrated vs. segregated), and seasonality (e.g., wet vs. rainy season for vectorborne infections), will ultimately affect R0. Because R0 is a function of the effective contact rate, the value of R0 is a function of human social behavior and organization, as well as the innate biological characteristics of particular pathogens. More than 20 different R0 values (range 5.4–18) were reported for measles in a variety of study areas and periods (22), and a review in 2017 identified feasible measles R0 values of 3.7–203.3 (23). This wide range highlights the potential variability in the value of R0 for an infectious disease event on the basis of local sociobehavioral and environmental circumstances.
As with your confused understanding of PCR specificity, you may have been misinformed by indefensibly oversimplified public messaging as to the definition of R0. (I had a comment deleted from /r/coronavirus in 2020, because they considered my statement that R0 varies with environment and human behavior to be misinformation; the citation didn't help.) The literature is once again available for anyone who wishes to read it, though.
No, I'm not confused about any of this. I'm giving the textbook definition in this thread, right? I'm not even sure why you are arguing with me instead of the user who started the thread by giving specific R0 numbers for influenza and SARS-CoV-2. As you say, you can't do that, because R0s are characteristics of outbreaks, not viruses.
So the confusion is the other way around. An outsider to the field would think R0 would be defined biologically given it's called the "basic reproduction number" and because epidemiologists themselves regularly make claims like "influenza has an R0 of this and measles has an R0 of that", but it takes all of five minutes to discover that the way it's calculated can't support such statements.
That's why as presently defined it's useless. If you can't compare the values with any other value, what are they for? Put another way, claims about R0 aren't falsifiable.
Epidemiology needs to develop far more robust methods that aren't just applying R-the-software to random datasets scraped from the web if it wants to be taken seriously as a field. There are very basic philosophy of science issues here. Argument by textbook gets us nowhere, because the textbooks are themselves written by people engaged in unscientific practices. The expectation by outsiders is reasonable, the actual way things operate isn't.
Therefore, my suggestion - meant constructively! - is to rebase the field on top of microbiological theory. Scrap the models for now. Delete "and everything else" from the R0 definition and come up with an algorithm to compute a measure of infectiousness from DNA/RNA or lab experiments only. Once you've got a base definition that lets different labs replicate each other's numbers, you can start to incorporate other aspects (under new variable names) like immune system strength, population density grids etc.
Such definitions won't let you give governments predictions of hospital bed demand right away, and indeed may not let you calculate much of real world value for a while, but the field isn't able to do that successfully today anyway. COVID models were all well off reality. What it would do though, is put epidemiology on track to one day deliver accurate results based on a firm theoretical foundation.
The suggestion that SARS-CoV-2 spreads faster than influenza because two studies on different populations at different times found R0 of 2.5 vs. 1.7 respectively is indeed false--that difference is obviously within the expected spread from different environments. I thought the top reply to that comment (quoting Wikipedia) clearly implied that, so I didn't think any further effort there was required. You posted other statements that were false in different ways, so I responded to those.
That said, SARS-CoV-2 really does spread faster than influenza; among other reasons, we know this because influenza cases went almost to zero during the pandemic, implying that the same behavior in the same population that clearly resulted in R0 > 1 for SARS-CoV-2 resulted in R0 < 1 for influenza. That's a relatively trivial and obvious claim, but it's falsifiable and it involves R0.
It's pretty common to reduce a time series to a single number. For example, in economics, it's common to look at a compound growth rate per year, averaged over the period of interest. Likewise, in epidemiology, it's common to look at a compound growth rate per estimated serial interval, averaged over the outbreak and corrected for immunity acquired during the outbreak. That's R0, with all the convenience and all the flaws of any other simple aggregate statistic.
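For a concrete example of that aggregate statistic (all case counts made up):

```python
# Cases counted in successive serial intervals (~5 days each, hypothetically).
cases = [100, 180, 320, 580, 1050]
periods = len(cases) - 1

# Compound growth factor per serial interval, computed the same way one
# would compute CAGR from a GDP series.
growth = (cases[-1] / cases[0]) ** (1 / periods)
print(f"compound growth per serial interval: {growth:.2f}")
```

Early in an outbreak, with nearly everyone susceptible, that growth factor is roughly the R0 estimate; later, one corrects for immunity acquired during the outbreak, as described above.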
> Therefore, my suggestion - meant constructively! - is to rebase the field on top of microbiological theory. Scrap the models for now. Delete "and everything else" from the R0 definition and come up with an algorithm to compute a measure of infectiousness from DNA/RNA or lab experiments only.
I hope you realize that biologists aren't all stupid? If they could somehow define "infectiousness of the pathogen alone, without environmental factors", then that would be incredibly useful, removing all the factors that complicate comparisons of R0. The fact that they've made no attempt to do so should be a clue that the concept that you're wishing for simply doesn't exist.
They do study growth rates in cell culture, or the amount of virus exhaled by a sick lab animal, or the amount of virus that a healthy lab animal must inhale to get infected with some probability. Those are well-defined and somewhat repeatable lab measurements, but they're not very predictive of spread in actual humans. Computational methods are even less predictive; the idea of calculating infectiousness in humans from a viral genome is mostly science fiction for now. They're trying, but this may be harder than you think.
> we know this because influenza cases went almost to zero during the pandemic, implying that the same behavior in the same population
We don't know this, no; what you said here is just an assumption. There's a competing hypothesis (viral interference) which seems to explain the data better.
The way epidemiology currently works just cannot tell us which virus is more infectious. I agree that a rebase onto microbiology would be hard and maybe fail but the current approach has already failed. Doing difficult research that might not pan out is how they justify grant funding in the first place.
> There's a competing hypothesis (viral interference) which seems to explain the data better.
How would that explain why SARS-CoV-2 suppressed influenza, instead of influenza suppressing SARS-CoV-2? If viral interference occurs (which I agree it may), then the two simultaneous pandemics are coupled; but you'd still expect the virus with higher R0 to win.
> > it is recommended not to ... compare values based on different models
> i.e. R0 numbers are worthless. You can't compare these values between pathogens
That conclusion doesn't follow from the predicate. The model isn't a single simulation run. To mathematicians, the model is the entire framework. It's the SIR model, or the SIS model, or the SEIR model, or my favorite model of immunity decay against pertussis... _those_ are the models. The virulence of the infection being modeled, resilience of the population, morbidity and mortality rates, rate and timing of population quarantine, vaccine adoption rate, vaccine efficacy, whether any of the aforementioned values are themselves functions of time... those are just parameters into your model. We absolutely compare the values between pathogens _within the same model_ when discussing the utility of the model.
But typically, the utility in the reproduction ratio is as a point of comparison as you tune other parameters. This is part of why a comparison of reproduction ratios across models isn't recommended. It's not just that it's hard to draw meaning from the comparison, it's that the base assumptions of the models might be too different for the comparison to hold any meaning. The reproductive ratio of a continuous model with uniform population mixing is fundamentally different from the reproductive ratio approximated from a discrete simulation. They may both broadly speak to "if everyone they saw was susceptible, how many people do you expect to get sick per sick person?" But what that ratio means is dictated by the context of the model.
A frequent utility in comparing ratios is to discuss intervention impact. One might write "Our model found that overall reproduction was reduced to a factor of 0.XYZ (comprehensive infectivity parameters a=nnn b=mmm c=ppp) when the population undertook vaccination schedule foo. Conversely, reproduction was reduced to a factor of 0.JKL for the same infection under vaccination schedule bar. See Figure 14 for complete population state levels over time."
That's all that is. It's a number that has some meaning in context, and one that is often removed from that context and unreasonably expected to retain its meaning. Averages don't mean as much if you don't know if your distribution is multimodal. It's the same thing - a summary stat that can give you a glimpse of the whole, but still just a summary stat.
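A toy illustration of the within-model comparison being described (all parameters hypothetical):

```python
# One SIR-style framework, one pathogen, different vaccination coverages.
beta, gamma = 0.4, 0.2        # transmission and recovery rates
R0 = beta / gamma             # basic reproduction number *of this model*

for coverage in (0.0, 0.3, 0.6):
    # Only unvaccinated contacts can be infected, so the effective
    # reproduction factor scales with the remaining susceptible fraction.
    R_eff = R0 * (1 - coverage)
    print(f"coverage {coverage:.0%}: reproduction factor {R_eff:.2f}")
```

Within one model those numbers are comparable; lifted out of this context and set against another group's figure, computed under different assumptions, they are not.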
I met the guy (Nick Hengartner) who did the 6 (really 5.7) estimate and knew his colleagues and I can tell you the paper was a lot more careful than that.
Want to explain how they computed it to us? Because if R0 is a real thing then either the people estimating R0=1.something were wrong, or Nick was wrong, and so somewhere some methodologies need to change. I read a fair few modelling papers and R0 (under varying names) was always being used by the model as a free variable used to achieve fit to a historical dataset. It was never being established via lab work, as you might expect given its definition.
So they did exactly what we're saying they do? The paper sets up some ad-hoc models unique to this academic group and then reverses them to find values of R0 that could fit. The problem is underdetermined, so there is a huge array of values that could work. Key line:
> Overall, we report R0 values are likely be between 4.7 and 6.6 with a CI between 2.8 to 11.3
That's an extremely wide CI by any measure. It's a bit unclear how this is meant to be a contradiction. It looks like a good example of the issue. There is no universal theory or method for computing R0. Every single epidemiologist has their own unique approach which is then often discarded in time for the next paper, making the numbers incomparable.
There's some minor variation due to different curve-fitting approaches, but the big variation (e.g. R0 from 3.7–203.3 for measles, per my other comment here) is real, simply because the environment or human behavior varied. As others have repeatedly noted, R0 is a function not just of the pathogen, but also of its environment, including the behavior of its human hosts.
Earlier, you wrote:
> It [R0] was never being established via lab work, as you might expect given its definition.
If you expected that "lab work" could establish R0, then you've grossly misunderstood its meaning and definition. It seems like you're looking for a concept of "R0 but for the pathogen alone, independent of environment and human behavior". That just doesn't exist though, any more than you could define the growth rate of a plant independent of weather and soil fertility.
Briefly, as the substance is discussed in my other reply:
> the big variation (e.g. R0 from 3.7–203.3 for measles, per my other comment here) is real
Morpheus: "What is real? How do you define, real?"
CIs that wide are just an obfuscated way of saying "we have no idea what's going on or what will happen". Anyone can make predictions that way. For example, by the end of my life my bank balance will be $4.7 million (CI $5.00-$50M). Those numbers aren't "real" in any meaningful sense. Anyone can express don't-know in sophisticated-looking numerical form, and they wouldn't justify me claiming to be a financial expert on the back of them.
> any more than you could define the growth rate of a plant independent of weather and soil fertility.
You can define growth rate of a plant this way: define a standardized lab environment with regularized soil composition and artificial light/watering schedules. Then plant seeds and measure the dry weight at the end of a fixed time period. This will give you a number that's comparable across species. There are other definitions you could use because "growth rate" is slightly ambiguous in English (does a tree grow faster than a weed because the tree achieves bigger mass?), but that's the general idea.
What you shouldn't do is just grab datasets of wildly varying quality off the internet, fit an equation you just invented on the spot to that data, and announce you've discovered something real about the plants in the data. That would indeed yield a growth rate that doesn't tell you anything meaningful.
> You can define growth rate of a plant this way: define a standardized lab environment [...]
If the standardized lab environment is dry and sunny, then you'll conclude that a cactus grows faster than a fern, since the ferns will mostly shrivel up and die. If the standardized lab environment is moist and shaded, then you'll conclude that the fern grows faster, since cactuses will mostly die for lack of sun. So which is right?
The concept that you're looking for simply doesn't exist--the growth rate of an organism can't be defined except with reference to its environment, which for a virus that infects humans includes human behavior. (What rate of condom use should the standardized lab environment for HIV correspond to? How will you model the increased popularity of fentanyl?)
You are looking for CS-level rigor and simplicity in biology, but biology doesn't work like that. You are correct that many biological results were thus oversold to the public during the pandemic; but you're once again criticizing those public-facing oversimplifications, not any science as a practitioner would understand it.
Neither can be said to be right without reference to a fixed context. If your field standardizes on a lab setup that's drier and sunnier than what ferns like, you'd indeed conclude that ferns grow slower in that context and that's OK because the growth rate is at least well defined. If there's a use case for comparable growth rates in different contexts, OK, define separate names for those rates and measure them separately. Or try to isolate the effect of heat and light such that the growth rate of any plant can be computed from the equivalent of e=mc^2.
Likewise you wouldn't try to measure the infectiousness of HIV in people, clearly. If you want to measure the relative "infectiousness" of viruses in humans using precise numbers then you'd need a controlled experimental environment, presumably something in vitro. That would miss a lot of factors that are important if you're trying to predict epidemics at the society-wide level, but OK, so be it. You need a firm footing of the basics before you can progress to more complex scenarios.
I don't really agree that it's unreasonable to expect CS-level rigor in biology. Microbiologists seem to manage it? It's expected that if two labs sequence the same organism they can in principle get the same DNA sequence, and if they do Xray crystallography on the same protein they'll derive the same structure. So we're not even comparing biology and CS here, we're comparing microbiology with epidemiology. The latter seems to be far closer to a social science in terms of its methods and rigor.
To be clear, it's also fine to do epidemiology using less rigorous methods if it was done in the way it mostly used to be done. When I read papers from the 80s they seemed to be much more appropriate to the actual data quality - largely prose oriented, very limited use of maths, presenting falsifiable hypotheses whilst admitting to the big unknowns. That's fine, science doesn't always have to be precisely quantifiable especially on the margins of what's known. But if scientists do precisely quantify things, then those quantities should be well defined.
> That would miss a lot of factors that are important if you're trying to predict epidemics at the society-wide level, but OK, so be it. You need a firm footing of the basics before you can progress to more complex scenarios.
That's how EE/CS stuff usually works (at least outside ML), building complex systems hierarchically out of well-understood primitives. The life sciences are different. There's almost nothing there we understand well enough to build like that, so almost all results of practical importance (a novel antibiotic, a vaccine, a cultivar of wheat, etc.) are produced by experiment and iteration on the complete system of interest, guided to some extent by our limited theoretical understanding.
This discrepancy has been noted many times; it's just a completely different way of working and thinking. If you haven't, then you might read "Can a biologist fix a radio?".
> Microbiologists seem to manage it?
A grad student in microbiology can grow millions of test organisms in a few days, at the cost of a few dollars, and get all the usual benefits of the central limit theorem. A grad student in epidemiology absolutely can't, since their test organisms are necessarily people. So you're quite correct that it's basically a social science, since it depends on aggregate human behavior in the same way e.g. that economics does, and is therefore just as dismal. Unfortunately it's also the best and only science capable of answering questions of significant practical importance, like whether the hospitals are about to be overrun. I'd tend to agree that stuff like Imperial College's CovidSim has so many parameters and so little ground truth as to have almost no predictive value. R0 seems fine to me though, and usefully well-defined, in the same way that the CAGR of a country's GDP seems fine.
In the life sciences, it's often possible to design an experiment under artificial conditions that will get a repeatable answer, like the growth rate of a plant in a certain controlled environment. It's much more difficult to use the result of such a repeatable experiment for any practical purpose; consider, for example, the steep falloff in drug candidates as they move from in vitro screens (cheap and repeatable, but only weakly predictive) to human trials (predictive by definition, but expensive and noisy). I'm absolutely not a life scientist myself, in part because I think I'd find that maddening; but essentially all results of practical benefit there came from researchers working in that way.
The R naught is not as complicated as it sounds. If you get infected and pass the virus on to one person, and they to one person, and that person to another person, and this pattern persists throughout society, you have a reproduction number of 1. If you pass it on to two people, and so on down the line, you have an R naught of 2. If it falls below one, the outbreak shrinks and eventually dies out; a disease that instead settles into steady, low-level transmission qualifies as endemic.
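A minimal sketch of that chain (numbers hypothetical):

```python
def case_generations(R, start=1, n=6):
    # Each generation of cases multiplies by the reproduction number R.
    cases = [start]
    for _ in range(n):
        cases.append(cases[-1] * R)
    return cases

print(case_generations(2))    # 1, 2, 4, 8, 16, 32, 64 -- the outbreak grows
print(case_generations(0.5))  # 1, 0.5, 0.25, ... -- the chain fizzles out
```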
The infection rate is always conjectural, not really empirical. It’s impossible to discern without universalized, random, and thoroughly accurate testing, tracing, and tracking. Those conditions have never been met in any country or any pandemic. So what seems to be a measure of an existing reality is really true only in theory, not realistically discernible in the midst of a pandemic. At best, it is an estimate.
Masks are ineffective at lowering the R naught. Social distancing might be.
This is interesting, particularly when one considers how modern Chinese society has stretched this norm. It used to be that children were financially responsible for taking care of their aged parents.
Now, due to social pressures, aged parents are expected to literally give all their wealth to their children so they can buy a house, invest in some get-rich-quick scheme, or pay for childcare/education for the grandkids. The pressure to do so is even higher for sons, who have trouble attracting wives if they don't have a house (recall there are fewer women than men in China).
If, given that donation, their children can buy a home, the parents move in and act as live-in childcare for the grandkids, while their children work 80-hour weeks to pay off the debt.
Under the guise of filial piety and "being taken care of", the elderly are under high pressure to give both their wealth and autonomy to their adult children. That loss of autonomy matters; if you get sick, you have to rely on your children to navigate through and pay for the healthcare. This may include getting approval to spend money from a son-in-law or daughter-in-law who doesn't much like you (despite their filial obligation to take care of you) and is thinking the money is better spent on more tutoring for the grandkids (in the hyper-competitive education system) or paying down the mortgage.
The word is rooted in family relationships. It places a very high value on being blood related and how people are expected to care for their own. Society changed a little bit but the concept remains the same at its very core: it is your blood, you must provide for them if you are able to.
Keep in mind this study is evaluating the claims that microdosing leads to specific outcomes, such as enhanced wellness and cognition, which people seek out while microdosing. That's different from using strong doses of psychedelics to, say, break entrenched thought patterns (addiction, depression, OCD).
In the former case, a positive outcome is expected and therefore negative outcomes are in a sense less tolerable, especially since a false positive would lead to people repeatedly microdosing over an extended period of time, to their long-term detriment.
In the latter case, a "negative experience" does not preclude getting the desired results. And an acute negative experience in a one-time dose may be tolerable when contrasted with the long-term severity of the pathology it is meant to treat.