We read with interest the study of Huang et al examining the risk for cancer associated with the use of angiotensin II receptor blockers (ARBs) in a large observational cohort of Taiwanese subjects with incident hypertension. The study was motivated by a highly publicized meta-analysis of clinical trials that found a small but nominally significant increased risk for cancer in patients randomized to ARBs compared to those not randomized to ARBs (risk ratio 1.08, 95% confidence interval [CI] 1.01 to 1.15), driven largely by an excess risk for lung cancer (risk ratio 1.25, 95% CI 1.05 to 1.49). In contrast to this clinical trial meta-analysis, Huang et al report an impressive reduction of approximately 34% in cancer risk associated with the use of ARBs. The reduction in risk was uniform across several co-morbid conditions and all major cancer sites, including lung cancer. The protective effect was substantially more pronounced in patients with >1 year of exposure to ARBs (hazard ratio 0.50, 95% CI 0.46 to 0.53) than in those with ≤1 year of exposure (hazard ratio 0.79, 95% CI 0.75 to 0.83). Taken at face value, such a strong duration-response relation would suggest a causal association.
Reports of an elevated cancer risk attributed to the use of various classes of antihypertensive drugs have transiently received widespread attention in the past. However, these initial concerning reports have invariably been refuted by subsequent studies. ARBs appear to be following a similar course, with more recent, expanded meta-analyses of clinical trials conducted by the United States Food and Drug Administration and others showing no excess risk for cancer in ARB users. Furthermore, a large Danish observational cohort study also detected no increased risk for cancer in ARB users.
How then does one explain the extraordinary protective effects of ARBs on cancer risk observed by Huang et al in light of the totality of evidence coming from other studies? On the basis of the investigators’ declared analytic approach, we strongly suspect that the protective effect observed is merely a consequence of immortal person-time bias.
Bias from immortal person-time was first identified in epidemiologic studies in the 1970s, in the setting of cohort studies of the survival benefit of heart transplantation. It has recently resurfaced in pharmacoepidemiology, with several observational studies reporting implausibly large reductions in morbidity and mortality with various medications. Immortal time is a span of cohort follow-up during which, because of the exposure definition, the outcome under study could not occur. In the study by Huang et al, the survival time in the Cox proportional-hazards analyses was calculated from the date of diagnosis of hypertension (time zero) to the date of diagnosis of cancer, but exposure to an ARB can begin days, months, or even years after time zero. Thus, the person-time between time zero and the first ARB prescription is “immortal,” because patients must survive free of cancer to receive their initial ARB prescriptions. Classifying this immortal person-time as exposed dilutes the observed cancer rate in the ARB group, giving the “treated” group an artificial advantage over the comparison group. The advantage is magnified when the exposure definition dictates a minimum amount of exposure (as in the subgroup analysis requiring >1 year of exposure to ARBs), because all the person-time that makes up the minimum exposure requirement is also “immortal.”
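To make this mechanism concrete, the brief simulation below (a minimal sketch, not the authors’ data or methods) generates a cohort in which ARB use has no effect whatsoever on cancer risk; the cohort size, cancer hazard, and prescription-timing distribution are arbitrary illustrative assumptions. Classifying each user’s entire follow-up as exposed, including the immortal interval before the first prescription, nonetheless yields an apparently strong protective effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Cohort with NO true drug effect: a constant cancer hazard for everyone,
# followed for at most 10 years after the hypertension diagnosis (time zero).
time_to_cancer = rng.exponential(scale=20.0, size=n)   # years
event_time = np.minimum(time_to_cancer, 10.0)          # follow-up ends at 10 years
event = time_to_cancer <= 10.0

# Half of the subjects eventually start an ARB, at a random time after time zero.
arb_start = np.where(rng.random(n) < 0.5,
                     rng.uniform(0.0, 5.0, size=n), np.inf)

# A subject counts as an ARB "user" only if the first prescription precedes
# cancer/censoring, i.e. the subject survived the immortal interval cancer-free.
is_user = arb_start < event_time

# Naive (biased) analysis: all of a user's follow-up is classified as exposed,
# including the immortal interval before the first prescription.
rate_exposed = event[is_user].sum() / event_time[is_user].sum()
rate_unexposed = event[~is_user].sum() / event_time[~is_user].sum()
print(f"Spurious rate ratio under naive classification: "
      f"{rate_exposed / rate_unexposed:.2f}")
```

Because eventual users must, by definition, remain cancer-free until their first prescription, their event-free immortal time inflates the exposed person-time without contributing any events, and the printed rate ratio falls well below 1 despite the true null.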
Immortal time bias in cohort studies such as that by Huang et al can be avoided by ensuring that immortal person-time is not classified as exposed. The exposure status of this person-time can be classified appropriately by adopting a Cox model with a time-dependent drug exposure or a Poisson model, as was done in the large null Danish study. We are confident that applying either of these analytic modifications to the study by Huang et al would produce a conclusion more consistent with the prevailing evidence.
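As a concrete illustration of the correction, the sketch below repeats the same hypothetical null simulation but assigns the pre-prescription interval to the unexposed group, in the spirit of the Poisson (person-time) approach; a Cox model with a time-dependent exposure partitions follow-up at the same point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Same null-effect cohort as in the previous sketch.
time_to_cancer = rng.exponential(scale=20.0, size=n)
event_time = np.minimum(time_to_cancer, 10.0)
event = time_to_cancer <= 10.0
arb_start = np.where(rng.random(n) < 0.5,
                     rng.uniform(0.0, 5.0, size=n), np.inf)
is_user = arb_start < event_time

# Correct person-time classification: follow-up before the first prescription is
# unexposed; only follow-up after it is exposed. A time-dependent Cox model
# splits each user's follow-up at the same point.
py_exposed = (event_time[is_user] - arb_start[is_user]).sum()
py_unexposed = event_time[~is_user].sum() + arb_start[is_user].sum()

events_exposed = event[is_user].sum()     # users' cancers all occur after arb_start
events_unexposed = event[~is_user].sum()  # no cancers occur during immortal time

rate_ratio = (events_exposed / py_exposed) / (events_unexposed / py_unexposed)
print(f"Rate ratio with correctly classified person-time: {rate_ratio:.2f}")
```

With the immortal interval counted as unexposed, the rate ratio returns to approximately 1, the true null in this simulated cohort.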