Hacker News

There was an interesting two-part series on the Freakonomics podcast about academic fraud, and it covered this case.

It's all incredibly depressing. I really feel for all the junior researchers who end up wasting their time, and often derailing their careers because they followed a path based on other people's academic fraud.

https://freakonomics.com/podcast-tag/academic-fraud/



It seems like dishonesty in academia is not punished severely enough. Someone who is caught may face professional embarrassment, and lose some privileges, but I think there should be tougher consequences. Hell, Ranga Dias still seems to be employed. These people are wasting government (our) money which could have been used by honest scientists to further our scientific knowledge of the world. It also further wastes the time of other scientists who may try to build off of the fraudulent research. To me, they are essentially committing fraud in a field where truth-seeking is paramount. I think a trial and prison time needs to be a part of the consequence for flagrant fraud.


A hardline approach is not going to win support from the people on the front lines in the best position to spot and police this. Even the most honest researcher has a published result or three that they suspect is incorrect and they will not trust you to accurately litigate against only the worst players because they are smart cookies and fully understand that honesty is a severe liability when there is an authority out for blood.

No, the better approach here is to just shift the incentives. Start funding replication. Once we see labs and career paths that specialize in replication / knowledge consolidation, the whole system will shift for the better. Bibliometrics and hiring committees will start to pay attention and then exploratory researchers will start to pay attention and the system will start to work a little bit better.


> Start funding replication.

I’m not convinced this fixes anything. Even when a result is genuine, it’s very easy to fail to replicate it. We all know this from software development: it’s a lot easier to say “couldn’t reproduce” about a genuine bug than it is to track down the precise context in which the bug actually manifests. So if you get rewarded for failing to replicate a result, all the fraudsters will do that. If you get funded only when you actually replicate the result, the fraudsters will pretend to replicate the result.


> Even when a result is genuine, it’s very easy to fail to replicate it.

To the extent that is true, then it is itself evidence supporting the proposition that not-yet-replicated results should be regarded as provisional.


> We all know this from software development: it’s a lot easier to say “couldn’t reproduce” about a genuine bug than it is to track down the precise context in which the bug actually manifests.

If a study claims to prove something, it should be repeatedly provable or it's a) fraud or b) not proven solidly enough.

I think replication is a key component of functional research.


Figuring out why a result is reproducible by some, but not others, is probably where the scientific discovery lies, if there is one to be had.


This makes so much sense. Research is compounding, even for failures to reproduce.


> So if you get rewarded for failing to replicate a result, all the fraudsters will do that. If you get funded only when you actually replicate the result, the fraudsters will pretend to replicate the result.

So reward either? This seems pretty obvious.


It's much easier to fail to replicate a result than to actually try to replicate it.


Yes, but the original author is incentivized to show where the replicators got it wrong, so there will still be a push to correct the bad data from a false replication failure. With that said, I am not convinced it will be a panacea.

A bigger issue is that... 3 sigma is really a weak signal in a high-cardinality state space, which describes basically everything above the physics of small numbers of elementary particles, and it is the particle physicists who demand higher thresholds. This is what is feeding the replication crisis: weak signals from very poorly sampled studies.
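A minimal sketch (my own illustration, not from the thread) of why a 3-sigma threshold is weak once many small studies are run: even when every true effect is zero, small samples cross 3 sigma by chance alone, so a field producing thousands of underpowered studies will generate a steady trickle of spurious "discoveries".

```python
import random
import statistics

random.seed(0)
SIGMA_THRESHOLD = 3.0
N_STUDIES = 10_000   # many independent small studies
N_SAMPLES = 20       # badly underpowered sample size per study

false_positives = 0
for _ in range(N_STUDIES):
    # Each "study" measures a null effect: the true mean really is 0.
    data = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]
    mean = statistics.fmean(data)
    sem = statistics.stdev(data) / N_SAMPLES ** 0.5
    # A naive z-style test: count the study as a "discovery" past 3 sigma.
    if abs(mean / sem) > SIGMA_THRESHOLD:
        false_positives += 1

print(f"{false_positives} of {N_STUDIES} null studies crossed 3 sigma")
```

With small samples the statistic has fatter tails than a true normal, so the false-positive rate comes out somewhat above the textbook ~0.27%, which is part of the point: "3 sigma" from a tiny study is not the guarantee it sounds like.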

The meta issue is that we as a society need to start accepting that some things will take longer and require more investment to achieve results. Do fewer studies per grant, and treat three-sigma results only as justification for a real follow-up experiment/study, not as the acceptance criterion for "discovery!".

I'm not even going to touch politicization, since I really have no idea what to do about it without the cure being worse than the disease.


> the better approach here is to just shift the incentives.

Yes!

> Start funding replication

No :( That won't solve the problem, it'd just change shape. Funding people who produce fraudulent or incompetent papers won't stop being a problem if you ask them to produce a slightly different kind of paper. You just get fraudulent replications, fraudulent claims of failure to replicate, fraudulent rebuttals to claims of failure to replicate and so on.

The other thing this would do is sharply shift the distribution of papers towards those that are guaranteed to replicate, either because they prove things that are trivially true or because they prove things about simulated worlds that may or may not be connected to reality. Both categories of paper are already way too popular, nobody needs more.

Unfortunately, the incentives fix for this problem is so blindingly simple it's also difficult to bring up in polite company, because lots of people have an allergic reaction to the implications. Look at places where science is being done at scale without replication crises or widespread fraud, and notice what's different about the incentives. Then ensure science is always done with those incentives.


Why can we charge other professions with fraud but not researchers? I'm sure people in other professions might worry about being wrongfully prosecuted too, but that doesn't stop us. Even doctors get criminal charges when they deliberately do something wrong. You can be pretty sure that virtually every doctor has made honest mistakes, but they don't all prevent dishonest doctors from being brought to justice.


(IANAL) Fraud can often be pursued as a civil claim, in which case there needs to be an aggrieved party with credible evidence of damages, no inhibitions about making a splash, and the time and money to sue.

I suspect educational institutions don't want to be seen as organizations that sue their own researchers. Same with paper publishers. And same for grant issuers.

Downstream researchers? Do they have the time, the money, or the ability to show direct damages whose recompense would be worth all the effort?

Students affected perhaps could, but only if the effect was very direct and they had the resources. And desire to be known as someone who sues their professor.

The damage is usually so diffuse. There is no one party with all the reasons to sue.

I have no idea what the process would be for criminal prosecution, but the diffuse impact may be an inhibitory factor there too.


There are objective standards in other professions. The thing about research is, by definition, whatever you're doing is not part of an established profession yet.

Obviously that doesn't apply in settings like drug development where standards do exist, as defined by the best currently available treatments. But if someone is working on something like psychological studies where replicability is the exception rather than the rule, or on exotic tech where only one experimental facility might exist, or on substances or effects that exist only under weird conditions, it's not always that easy (or that safe) to accuse them of lying. Even when you're pretty sure they are.


I have a hunch that not everything you do as a researcher is novel. A part of it is, like you mention, new by definition.

But there's also the old and established parts, like statistics, parts of the experimental setup, methodology, reporting data accurately (or at all). This is plenty enough to have objective standards for.

A lot of fraud is not in making up experimental results, but instead misreporting the data and drawing unsupported conclusions.


Exactly! There are some obvious things too like: don't copy and paste tiny bits of an electrophoresis gel and put it into another image to make it look like it was the same result. Sylvain Lesné comes to mind here. Last I checked, this jackass still has a job, too


Agreed. There are plenty of stories in physics where researchers reported some effect which was later found to be due to an error in setting up the experiment. Are they supposed to face fraud charges because of this? Researchers will just quit and go work in industry or something.

And there are plenty of fields where you have different interpretations of the same data (see the entire field of economics, also plenty in physics and other fields). Should the people who espoused ether theory be sued for fraud? It'll be a huge mess because doing research is by definition doing something unprecedented.


If your contention is fraud then good news - we already have the laws and authorities required to pursue it. Nothing new required.


Or derailing their careers because only the very successful grad students and postdocs get grants and tenure, and evidently a good chunk of those elite spots get taken by people who publish dishonest research.


No one should be putting all their eggs in one scientific basket even if the basis ISN'T fraudulent. I see this mistake over and over.


I understand where you are coming from, but if you are starting out on a PhD and decide to do research that is branching off from work done by someone like Gino you could spend a long time chasing ghosts and it will be hard for you to turn round and say "I think this is actually BS".

Even once you are over that hump you will have quite a few years going from one short-term grant to another, with your ability to get funding depending on your previous work. If that has been stymied because you were basing it on other people's dodgy research, it could take a long time to build up the kind of record where you get to diversify and gain any kind of academic security or freedom.


A friend of mine proved (for his PhD I think) that the thing his entire department was working on was based on bullshit. They weren't happy.

That said, putting all your eggs in one basket is often necessary to get anywhere with that basket of research.


What did they disprove? Or at least, what field of study?


I forgot the details, but he studied both mathematics and AI, so something in either of those fields.


Having done a PhD and said 'I think this is BS', it's not easy but possible.


Academic research (especially a PhD) is about going deep into one particular topic. You are fully dependent upon the giants on which you stand.


Yep. "Just stand on the shoulders of TWO giants" is muuuuch easier said than done, lol.



