
In your opinion, when was most scientific research not flawed or "useless"? Remember, the Einsteins, Clarks, etc. of the world were wildly exceptional outliers.

Do you really think that if you were to look at most research done between 1897 and 1922, the overall quality would be higher? And that there would be fewer social factors competing with the integrity of the work?



> In your opinion, when was most scientific research not flawed or "useless"?

How long has it been since replicating someone else's study to validate the results was routinely part of the scientific process?

>The replication crisis is an ongoing methodological crisis in which it has been found that the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially of substantial parts of scientific knowledge.

https://en.wikipedia.org/wiki/Replication_crisis


Isn't that an economic problem rather than a scientific one?

Science doesn't exist in a vacuum: it still needs humans to do the grunt work, and those humans need to be paid. There isn't much money in replicating an existing study just to say "yep, that's what we thought."


Even new research needs funding. The point is that you can't trust the findings of research to be accurate if they haven't been proven to be reproducible. It doesn't do any good to fund research into X if you don't also do the grunt work, and part of that work is seeing the initial results carefully reviewed and then replicated.

Right now, far too often "review" is a rubber stamp and replication never takes place. That's because often science isn't really being done. If you're Tropicana, you might happily fund study after study after study, tweaking each one until you get the results you're looking for, so that you can get "OJ may reduce risk of cancer" into the headlines, then bury the results of all the funded research that contradicted it. But that isn't science; it's just advertising. In a better world, anyone involved in that kind of shit would be blacklisted as disreputable, if not charged with something.

Research that isn't, or can never be, replicated is barely better than speculation, and not really worth much of anything. If someone wants to fund science, we should insist that the process is actual science and the results are meaningful.


Yeah, I completely agree with all of that.

But again, this is an economic problem. In a perfect world, we'd have an infinite fund for doing science, and you couldn't publish a paper until your results had been reproduced.

But we live in a capitalist society where incentives are profit-driven (mostly). That's a reality regardless of whether you think it's good or bad.


That's why we need strong regulation and oversight. We know humans are highly vulnerable to greed. We can't (and arguably shouldn't) change that. We can however put measures in place to limit the harm we do to ourselves because of it.


> Isn't that an economic problem rather than a scientific one?

I'll add the other sentence from the first paragraph of the source above.

>Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially of substantial parts of scientific knowledge.


I'm not disagreeing that reproducibility is a problem. Of course it is.

I'm asking how to fix it. How do we incentivise reproducibility?


You have to fund it, simple as that. Tenure is granted to professors who win grants. Grants are won by publishing papers. Papers are published by conducting novel research.

One thing we could do is mandate that some portion of all grant money be given to independent researchers who will work to confirm your findings. I could imagine some downsides to doing this, but at least it would put money in play.

Another problem is that the people doing the research are usually grad students working toward a Ph.D. No one wants to do a Ph.D. confirming someone else's results, so you'd need some other workforce to do the work of reproducing research. Again, doable, but there needs to be money allocated for this task.


Replication as part of scientific consensus-building dates back at least to the air-pump experiments of the 17th century.

See e.g. “Leviathan and the Air-Pump”


I've been wondering the same.


> Do you really think that if you were to look at most research done between 1897 and 1922, the overall quality would be higher?

I suspect yes, because the poor research was simply not being done. Science was not a career, there was no pressure to publish; it was the pastime of an elite few.


> I suspect yes, because the poor research was simply not being done. Science was not a career, there was no pressure to publish; it was the pastime of an elite few.

One of the critiques I hear about academia these days, even in the hard sciences, is that it is now significantly more dominated by the sons and daughters of elites (who don't have to take out loans and can afford unpaid research opportunities, etc.). So it might be on its way back to that.


N-rays (1903) would be a counter-example, but I can't speak to the overall statistics of any scientific field that far back.


The exponential growth of spending on research is the main factor working to dilute quality and to corrupt the process.


Not to mention the "publish or perish" ethos that has become such an integral part of promotion & retention policy (and grants) in so many academic institutions.



