How Far-Off Are the Results of Published Meta-Analyses? A Monte-Carlo Simulation of the Potential Effects of Publication Bias
Given recent, intense criticism concerning the potential impact of publication bias on published meta-analytic results, can we still trust them? And if not, how far off might estimates of population effects be? In several simulations, we first explored the effects of publication bias on meta-analytic results for plausible parameter values, extracted from recent meta-analyses published in Psychological Bulletin, the major outlet for meta-analyses in psychology: (i) population effects, (ii) heterogeneity of population effects, (iii) average sample size per study, and (iv) number of studies. For publication bias of up to 50%, we found only small to moderate overestimates of population effects for the most prominent approach, the random-effects model. These overestimates were further reduced by applying the trim-and-fill method. The recently proposed alternative, the p-curve method, consistently overestimated population effects under these realistic conditions, even in the absence of publication bias. For larger amounts of publication bias, however, the overestimation turns out to be barely acceptable even for the traditional methods. Pronounced overestimates can be expected especially when population effects are very heterogeneous.
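The basic logic of such a simulation can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: all parameter values (a true effect of d = 0.3, heterogeneity tau = 0.2, 25 participants per group, 20 studies, 50% suppression of non-significant results) are hypothetical stand-ins for the ranges extracted from Psychological Bulletin, and the random-effects estimate uses the standard DerSimonian-Laird approach.

```python
import random
import statistics

def simulate_meta(delta=0.3, tau=0.2, n_per_group=25, k=20,
                  pub_bias=0.5, seed=0):
    """Simulate one publication-biased meta-analysis and return the
    random-effects estimate of the mean population effect.

    delta       : true mean population effect (Cohen's d)
    tau         : SD of study-level true effects (heterogeneity)
    n_per_group : sample size per group in each primary study
    k           : number of published studies collected
    pub_bias    : probability that a NON-significant study is suppressed
    """
    rng = random.Random(seed)
    effects, variances = [], []
    while len(effects) < k:
        theta = rng.gauss(delta, tau)                 # study's true effect
        var = 2 / n_per_group + theta**2 / (4 * n_per_group)  # approx. var of d
        d = rng.gauss(theta, var**0.5)                # observed effect size
        significant = abs(d / var**0.5) > 1.96
        # publication bias: suppress non-significant results with prob pub_bias
        if significant or rng.random() > pub_bias:
            effects.append(d)
            variances.append(var)
    # DerSimonian-Laird random-effects estimate of the mean effect
    w = [1 / v for v in variances]
    fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    return sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)

# Monte-Carlo: average the estimate over many simulated meta-analyses
est = statistics.mean(simulate_meta(seed=s) for s in range(500))
print(round(est, 2))  # typically somewhat above the true delta of 0.3
```

Under these assumed values the suppression of non-significant studies shifts the pooled estimate upward relative to the true effect, which is the overestimation the simulations quantify; setting `pub_bias=0.0` recovers an essentially unbiased estimate.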