Definitely an enormous issue! I’m not as familiar with the Long-COVID data, but the issue applies to a lot of other fields/areas.
I argue in the book that one of the most prevalent issues is linked to how researchers are incentivized to publish quickly and often. This means that studies often end up under-powered (i.e., with too few participants to reliably detect the effect they’re looking for), because including more patients is time-consuming and usually more expensive. The result is, as you point out, that we’re inundated with studies whose results are too uncertain to support any meaningful conclusion. Ultimately, this wastes research resources at a systems level, because one adequately powered study would have answered the question with fewer resources overall. But because it’s a publishing game, researchers aren’t incentivized to collaborate as much as we’d like from a progress perspective. In the book, I call this ‘artificial progress’: we think we’ve learnt something new about the world (through the publishing of these studies), but ultimately we’re just misleading ourselves and need to spend even more resources to clarify findings that should have been clear from the outset.
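To make the power point concrete, here’s a rough back-of-the-envelope sketch using statsmodels. The numbers are illustrative assumptions on my part (a two-arm comparison, a modest effect of Cohen’s d = 0.3, an 80% power target at a 5% significance level, and a ‘typical small study’ of 30 participants per arm); they don’t come from the book or from any specific field.

```python
# Sketch: how many participants an adequately powered study needs,
# versus the power of a typical small study, under assumed parameters.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants per arm needed to detect d = 0.3 with 80% power at alpha = 0.05
n_per_arm = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"One adequately powered study: ~{2 * round(n_per_arm)} participants total")

# A small study with 30 participants per arm has far less power for the same
# effect, so most such studies 'fail' and the question stays unresolved.
power_small = analysis.solve_power(effect_size=0.3, nobs1=30, alpha=0.05)
print(f"Power of a 30-per-arm study: {power_small:.0%}")
```

Under those assumptions, the single well-powered study needs a few hundred participants in total, while a handful of 60-person studies together consume comparable resources without ever settling the question.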
One could argue that it should ‘cost’ more for authors to submit under-powered studies to journals. As it stands, journals often accept this research despite the methodological flaws, so authors aren’t penalized for the behavior. Journals may also prioritize interesting results over adequate sample sizes, which means too many of these articles get published. Authors and journals ultimately both ‘win’ from this behavior.
I think this issue of sloppy research methods is probably MUCH more prevalent than we realize, but I haven’t been able to find reliable sources. In the book I talk about research misconduct and fraud, where some “studies suggest that the true rate of fraud among published studies lies somewhere between 0.01% and 0.4%.” I’d suspect the rate of sloppy research methods to be many times higher than that.
That makes sense, thank you.
“studies suggest that the true rate of fraud among published studies lies somewhere between 0.01% and 0.4%”. Even 0.4% seems drastically too low, perhaps by a factor of 10. I’d be curious to see the source for this claim. An analysis by Elisabeth Bik and others found problematic image duplication in 3.8% of studies. Some of that may have been accidental, but I suspect most of it was intentional fraud. If ~3.8% of papers contain this one specific type of fraud, that suggests an even larger percentage contain fraud of some kind. It’s extremely hard to know, though. I doubt it’s over 10%, but I could easily see it being 5%, which is obviously still a massive problem.