I briefly mentioned the Jonah Lehrer piece ("The Truth Wears Off") in the December 13 New Yorker on how initial studies tend to show large treatment effects, while subsequent studies show far less impact. Harvard Link | Non-Harvard Link
The iPad version of this article had graphics from an obscure biology journal which demonstrate this concept especially well.
Imagine a series of experiments, each with a very small number of experimental subjects. These will have large confidence intervals: some of the experiments will look incredibly positive, and others will look impressively negative. As researchers do experiments with more and more subjects, the confidence interval narrows, and even the most positive or negative experiment yields a result that is much closer to the 'truth.'
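To make that concrete, here is a minimal simulation sketch in Python. The true treatment effect of 0.2 and the outcome standard deviation of 1.0 are illustrative assumptions, not numbers from the article; the point is only to show how the spread of estimated effects shrinks as trials get larger.

```python
# Sketch: simulate many two-arm trials at several sample sizes and show how
# the spread of estimated treatment effects shrinks as trials get larger.
# The true effect (0.2) and outcome SD (1.0) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)
true_effect = 0.2   # assumed true treatment effect
sd = 1.0            # assumed outcome standard deviation

for n_per_arm in (10, 50, 250, 1000):
    estimates = []
    for _ in range(2000):  # 2,000 simulated trials at this size
        treated = rng.normal(true_effect, sd, n_per_arm)
        control = rng.normal(0.0, sd, n_per_arm)
        estimates.append(treated.mean() - control.mean())
    estimates = np.array(estimates)
    print(f"n={n_per_arm:5d} per arm: "
          f"mean estimate {estimates.mean():+.3f}, "
          f"range {estimates.min():+.3f} to {estimates.max():+.3f}")
```

At 10 subjects per arm, some simulated trials look spectacularly positive or negative purely by chance; at 1,000 per arm, nearly every estimate hugs the truth.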
BUT – and this is a very important but – what happens if we only find out about a small portion of the experiments that are completed? This is not a hypothetical concern: the federal government established a registry of pharmaceutical trials (ClinicalTrials.gov) for exactly this reason. The pharmas were publishing only the trials that showed success, and were not publishing those which showed that a drug didn't work or had unexpected side effects.
Here's the graphic that I found hugely helpful in explaining this phenomenon. It's called a funnel plot, and it's a standard biostatistics tool to evaluate whether there is 'publication bias,' and to assess whether a 'meta-analysis' combining the results of multiple trials is valid.
The first graphic shows that as sample size increases (moving right on the horizontal axis), the confidence interval narrows and results converge closer to the "truth."
The left side of the graphic represents small trials, and the right side represents trials with more subjects.
The second graphic shows that we would expect that small (“underpowered”) studies with uninteresting results would not be published – hence we should have a funnel plot with a hollow core.
The final graphic shows what happens when there is publication bias. Results that initially look very positive get worse and worse over time: the early results looked so good because of randomness alone, since neither the negative nor the neutral trials were reported. The simulation sketch below shows how this hollows out and skews the funnel.
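Here is a sketch of a funnel plot under the same illustrative assumptions as above. The publication rule here (small trials get published only if they look statistically significant and positive; big trials always get published) is a crude assumption for illustration, not a claim about any real journal's behavior.

```python
# Sketch of a funnel plot with publication bias. Small trials with
# "uninteresting" results are censored, which hollows out and skews the funnel.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=7)
true_effect, sd = 0.2, 1.0

sizes = rng.integers(10, 1000, size=400)   # per-arm sample sizes
effects = np.array([
    rng.normal(true_effect, sd, n).mean() - rng.normal(0.0, sd, n).mean()
    for n in sizes
])

# Crude publication rule (an assumption for illustration): small trials are
# published only if they look strongly positive; large trials always appear.
se = sd * np.sqrt(2 / sizes)               # standard error of the difference
published = (sizes > 300) | (effects / se > 1.96)

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
axes[0].scatter(sizes, effects, s=8)
axes[0].set_title("All trials: a symmetric funnel")
axes[1].scatter(sizes[published], effects[published], s=8)
axes[1].set_title("Published only: hollow, skewed funnel")
for ax in axes:
    ax.axhline(true_effect, linestyle="--", color="gray")
    ax.set_xlabel("Sample size per arm")
axes[0].set_ylabel("Estimated treatment effect")
plt.tight_layout()
plt.show()
```

The left panel is the honest funnel: wide scatter among small trials, tight convergence among big ones. The right panel is what the literature looks like when only "exciting" small trials see print: the lower-left corner of the funnel is missing, and the small trials that remain all sit above the truth.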
The implications of this are critical. When you hear about early results showing that anything (a new drug, a new medical management program, a new safety device for your automobile) is absolutely great, wait a while; you'll often discover that it's not quite as good as it seemed at first. If you see a meta-analysis combining data from many studies, be sure that the authors have done a funnel plot to assess how severely publication bias might impact the results.
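To see why a meta-analysis of only the published trials can mislead, here is a short sketch that regenerates the same simulated trials as above (same seed, same assumptions) and compares a simple fixed-effect pooled estimate with and without the censored trials:

```python
# Sketch: how publication bias inflates a naive pooled (meta-analytic)
# estimate. Same illustrative assumptions and seed as the funnel-plot sketch.
import numpy as np

rng = np.random.default_rng(seed=7)
true_effect, sd = 0.2, 1.0

sizes = rng.integers(10, 1000, size=400)
effects = np.array([
    rng.normal(true_effect, sd, n).mean() - rng.normal(0.0, sd, n).mean()
    for n in sizes
])
se = sd * np.sqrt(2 / sizes)
published = (sizes > 300) | (effects / se > 1.96)

def pooled(eff, s):
    """Fixed-effect inverse-variance weighted mean."""
    w = 1 / s**2
    return (w * eff).sum() / w.sum()

print(f"True effect:              {true_effect:+.3f}")
print(f"Pooled, all trials:       {pooled(effects, se):+.3f}")
print(f"Pooled, published only:   {pooled(effects[published], se[published]):+.3f}")
```

Pooling everything recovers the truth; pooling only the published trials drifts upward, which is exactly the overstatement that a funnel plot is designed to flag.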
The “Managing Health Care Costs Indicator” will return with the next post. Thanks for hanging with me on a wonky topic.
Here are links to the source article for these graphics: