We read the review article by Christakopoulos et al, which addresses an important clinical question. However, we are concerned by the poor methodology and presentation of the results. The authors make a vague statement about publication bias, “Publication bias was assessed through visual inspection of funnel plots (Begg’s method) which demonstrated some asymmetry for all-cause mortality, myocardial infarction, angina, CABG and MACE suggestive of possible publication bias,” and refer readers to the supplemental material for details. This statement has several flaws. The authors equate “some asymmetry” on the funnel plot with publication bias, which is incorrect. Funnel plot asymmetry can arise from many sources, including heterogeneity, chance, small-study effects, artifacts of the chosen effect statistic, and publication bias. Conversely, funnel plots can be symmetrical even in the presence of publication or selective reporting bias. The authors plotted the variance of the log odds ratio against the log odds ratio (supplemental material of the article). Although this is acceptable, plotting the standard error of the log odds ratio is the preferred method because it allows better visual interpretation of the funnel plot. If variance is more appropriate here, the authors should provide a rationale. The review reports 3 statistical tests of publication bias with each funnel plot, whereas the Cochrane Handbook recommends that only 1 test be selected a priori and reported on the basis of the nature of the data. Interpretation of multiple statistical tests, as in this article, is confusing and undesirable. The authors themselves did not attempt to interpret the individual funnel plots and accompanying statistical tests. Begg’s test should be removed because it is not recommended and has the lowest power, especially with a small number of studies, as in this meta-analysis.
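To make the variance-versus-standard-error point concrete, a minimal sketch of the quantities involved follows, using the standard large-sample formulas for a 2 × 2 table; the table counts are hypothetical and not taken from the review.

```python
import math

def log_or_and_se(a, b, c, d):
    """Log odds ratio and its standard error from a 2x2 table
    (a = events, b = non-events in the treatment arm; c, d in the
    control arm), using the usual large-sample variance formula.
    Funnel plots conventionally place the SE (not the variance)
    on the vertical axis."""
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # variance of the log OR
    return log_or, math.sqrt(var)          # SE = sqrt(variance)

# Hypothetical study: 10/100 events vs 20/100 events
lo, se = log_or_and_se(10, 90, 20, 80)
```

Plotting the SE on an inverted axis is preferred because the pseudo–95% confidence limits around the pooled estimate then form straight lines, making asymmetry easier to judge by eye; on the variance scale those limits are curved.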
The Peters or Harbord test would probably be a better choice, but any statistical test should be interpreted in conjunction with other metrics. When fewer than 10 studies are included in an analysis, statistical tests of funnel plot asymmetry are severely underpowered and should be removed from the article (e.g., the TVR analysis includes only 5 studies).
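As an illustration of how regression-based asymmetry tests of this family work, a sketch of the classic Egger regression is given below (for binary outcomes, the Harbord and Peters tests are modifications of this idea that reduce false positives); the input numbers are made up for demonstration.

```python
import numpy as np

def egger_test(theta, se):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardized effect (theta/se) on precision (1/se) by ordinary
    least squares and test whether the intercept differs from zero.
    A large intercept t-statistic suggests asymmetry."""
    theta, se = np.asarray(theta, float), np.asarray(se, float)
    y = theta / se                       # standardized effect
    x = 1.0 / se                         # precision
    X = np.column_stack([np.ones_like(x), x])   # intercept + slope
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    k = len(theta)
    s2 = resid @ resid / (k - 2)         # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    return beta[0], t_intercept

# Hypothetical log odds ratios and standard errors
b0, t = egger_test([0.2, 0.5, 0.1, 0.4], [0.1, 0.3, 0.15, 0.25])
```

With only a handful of studies, as here, the test has almost no power, which is precisely why such tests should not be reported for small analyses.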
Whenever asymmetry is detected on a funnel plot, it requires further investigation with contour-enhanced funnel plots, appropriate statistical tests (when applicable), cumulative meta-analysis, and other methods beyond the scope of this letter. Reviewers first need to assess whether the asymmetry could be due to publication bias and then attempt to address it; if the publication bias cannot be addressed, an explanation is warranted. Publication bias arises because statistically significant results showing an intervention effect are more likely to be published, whereas negative results are often suppressed or selectively reported. It differs from small-study effects, which can result from poor methodology, higher-risk populations, and other local factors.
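One of the follow-up methods mentioned above, cumulative meta-analysis ordered by precision, can be sketched as follows: studies are added from most to least precise, and a steady drift of the running pooled estimate as smaller studies enter is a signature of small-study effects. The inputs are hypothetical log odds ratios, not data from the review.

```python
import numpy as np

def cumulative_meta(log_or, se):
    """Cumulative fixed-effect (inverse-variance) meta-analysis:
    add studies from most to least precise and return the running
    pooled log odds ratio after each addition."""
    order = np.argsort(se)                         # most precise first
    lo = np.asarray(log_or, float)[order]
    w = 1.0 / np.asarray(se, float)[order] ** 2    # inverse-variance weights
    return np.cumsum(w * lo) / np.cumsum(w)        # running pooled estimates

# Hypothetical: a precise study (SE 0.1) then a small one (SE 0.5)
running = cumulative_meta([-0.5, -1.0], [0.1, 0.5])
```

If the running estimate shifts appreciably only when the least precise studies are added, the asymmetry is more plausibly a small-study phenomenon than a property of the underlying effect.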
The authors further state, “publication bias is possible (higher volume, more experienced centers may be more likely to report their outcomes),” in the study limitations. This statement is difficult to interpret and is disconnected from their analysis; the authors did not delve into the sources of bias in their analysis. Random-effects models were used throughout the review without a proper rationale. Although they are appropriate for most analyses here, fixed-effect analyses should also be reported in some cases, such as all-cause mortality. Random-effects analysis assigns relatively more weight to smaller studies, which can be problematic if true publication bias is present, because true publication bias is usually driven by small studies: larger studies are difficult to make disappear and are more likely to be published notwithstanding negative results. In this scenario, random-effects models can produce biased results. When we ran a fixed-effect model for the all-cause mortality analysis reported in the review article, it indeed produced a slightly more conservative pooled effect size (odds ratio 0.63, 95% confidence interval 0.58 to 0.70 for fixed effect vs odds ratio 0.52, 95% confidence interval 0.43 to 0.63 for random effects). As the reanalysis of all-cause mortality makes clear, the direction of the overall effect will likely remain the same even after accounting for publication bias. A discussion of methods to address publication bias is beyond the scope of this letter.
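The fixed-effect versus random-effects contrast discussed above can be sketched with standard inverse-variance pooling and the DerSimonian-Laird estimate of between-study variance; the weighting shows why, when between-study heterogeneity is nonzero, random-effects pooling shifts relative weight toward smaller studies. The study data here are hypothetical, not the mortality data from the review.

```python
import numpy as np

def pool_fixed_and_random(log_or, var):
    """Inverse-variance fixed-effect pooling and DerSimonian-Laird
    random-effects pooling of log odds ratios; returns both pooled
    estimates back-transformed to the odds-ratio scale."""
    lo = np.asarray(log_or, float)
    v = np.asarray(var, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * lo) / np.sum(w)
    # DerSimonian-Laird tau^2 from Cochran's Q
    q = np.sum(w * (lo - fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(lo) - 1)) / c)
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    random = np.sum(w_re * lo) / np.sum(w_re)
    return np.exp(fixed), np.exp(random)

# Hypothetical homogeneous studies: tau^2 = 0, so the two models agree
or_fixed, or_random = pool_fixed_and_random([0.1, 0.1, 0.1],
                                            [0.04, 0.09, 0.16])
```

Because each random-effects weight is 1/(v + tau²), a nonzero tau² flattens the weight differences between large and small studies, which is the mechanism behind the divergence we observed in the all-cause mortality reanalysis.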