
Chapter 19 Principles of Clinical Trials




Introduction


A clinical trial is a medical study in humans. Typically, these studies are conducted to determine the efficacy and the safety of a drug or medical device. Patients are assigned to one of possibly many prespecified treatments, and their outcomes are recorded.


Clinical trials are different from other types of human studies because the exposure or the intervention is prospective and controlled. For example, if the goal of the study is to determine the relationship between niacin use and high-density lipoprotein (HDL), you can do one of the following:

1. Review existing medical records and compare changes in HDL between patients whose physicians prescribed niacin and patients who did not receive it (a retrospective observational study).
2. Follow a group of patients prospectively, recording niacin use and HDL levels as they occur in routine care (a prospective observational study).
3. Randomly assign patients to receive niacin or not and compare the change in HDL between the groups (a randomized clinical trial).





In items 1 and 2, the intervention, niacin use, is not under the control of the investigators. Clinical factors such as age, smoking status, and other comorbidities, as well as HDL levels themselves, may affect the physician’s choice to prescribe niacin, so the strongest statement that can be made on the basis of these studies is that niacin use is associated with changes in HDL. In contrast, when the intervention is randomized, factors that could be related to HDL and clinical characteristics that would otherwise influence the decision to prescribe niacin no longer determine treatment assignment. As a result, the statement that niacin use causes changes in HDL is valid. The causal relationship between a test treatment and an outcome is called efficacy when it is measured in clinical trials and effectiveness when it is measured in a routine care setting. Randomized controlled trials (RCTs) are the gold standard for evidence of the efficacy of a drug or other intervention, and this evidence is used by the U.S. Food and Drug Administration (FDA) to determine whether new medical treatments should be approved.1



Hypothesis Testing


Successful clinical trials begin with a clearly stated hypothesis, or a statement about the effect of an intervention on a population. The design of the study is usually dictated by the hypothesis, which consists of two mutually exclusive statements. Usually, one statement describes a clinically meaningful effect of an intervention (alternative hypothesis), and the other describes harm, no effect, or an effect too small to be clinically meaningful (null hypothesis). In the spirit of scientific inquiry, the goal is not to prove the alternative hypothesis but to disprove the null hypothesis. Continuing with the niacin example, a hypothesis might be:

Null hypothesis: Niacin does not increase the mean percentage change in HDL relative to placebo; it lowers HDL, has no effect, or produces an increase too small to be clinically meaningful.

Alternative hypothesis: Niacin produces a clinically meaningful increase in the mean percentage change in HDL relative to placebo.



Note that every possibility of a relationship between niacin use and change in HDL is covered by one of these two statements. Note also that the measurement used to evaluate change in HDL is also defined in the hypothesis: Conclusions will be based on the mean percentage change in HDL.
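A minimal sketch of how such a hypothesis might be tested, assuming two simulated groups of percentage changes in HDL; the group sizes, means, significance level, and the choice of a two-sample t-test are illustrative assumptions, not details of any actual trial:

```python
# Sketch: testing whether niacin produces a greater mean percentage change in
# HDL than placebo, using a one-sided two-sample t-test.
# All numbers are illustrative assumptions, not real trial data.
# (The "alternative" argument requires SciPy 1.6 or later.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Simulated percentage change in HDL for each arm
niacin = rng.normal(loc=15.0, scale=10.0, size=100)   # hypothetical niacin arm
placebo = rng.normal(loc=5.0, scale=10.0, size=100)   # hypothetical placebo arm

# One-sided test: alternative is that the niacin mean exceeds the placebo mean
t_stat, p_value = stats.ttest_ind(niacin, placebo, alternative="greater")

alpha = 0.05  # prespecified significance level
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: evidence of an increase in HDL with niacin.")
else:
    print("Fail to reject the null hypothesis.")
```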



Types of Comparisons


Hypotheses usually involve demonstrating that one treatment is significantly better than another (superiority) or that one treatment is not meaningfully worse than another treatment (non-inferiority, equivalence) (Figure 19-1). When testing an active drug or treatment against placebo, the study should be designed to detect the smallest improvement that would be clinically meaningful given the cost and side effects of the drug. Placebo-controlled trials should always be superiority trials if the goal is to test the efficacy of the new drug. Sometimes non-inferiority placebo-controlled trials are used to establish the safety of a new drug.



It is important to determine, a priori, whether the goal is to prove superiority or non-inferiority. If the new drug has efficacy comparable with that of an accepted treatment but offers an advantage such as a better side-effect profile or lower cost, then demonstration of non-inferiority would likely be acceptable. If a study designed as a non-inferiority trial shows superiority, then the superiority of the new drug can be the stated result. However, it is important to remember that a failed superiority trial is not a non-inferiority trial. Non-inferiority trials are, by design, much larger than superiority trials. The driving factor that makes non-inferiority trials large is the non-inferiority margin, that is, the largest amount by which the new drug may fall short of the accepted one while still being considered not meaningfully worse.
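The role of the margin can be seen in a small sketch that checks whether the confidence interval for the difference between a new drug and an accepted treatment stays above the negative of the non-inferiority margin; the event rates, sample sizes, and 5% margin below are invented for illustration:

```python
# Sketch: a non-inferiority comparison of two response proportions.
# The new drug is declared non-inferior if the lower bound of the confidence
# interval for (new - accepted) lies above the negative of the margin.
# Response rates, sample sizes, and the margin are illustrative assumptions.
import math

def ni_check(success_new, n_new, success_std, n_std, margin, z=1.96):
    p_new = success_new / n_new
    p_std = success_std / n_std
    diff = p_new - p_std
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    lower = diff - z * se   # lower bound of the two-sided 95% CI
    upper = diff + z * se
    non_inferior = lower > -margin
    return diff, (lower, upper), non_inferior

diff, ci, ok = ni_check(success_new=860, n_new=1000,
                        success_std=880, n_std=1000,
                        margin=0.05)
print(f"difference = {diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print("non-inferior within the 5% margin" if ok else "non-inferiority not shown")
```

With these illustrative numbers, the interval clears the margin only because of the large sample sizes; a tighter margin or fewer patients would widen the interval past it, which is one way to see why non-inferiority trials tend to be large.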




Primary Outcome


The human phase of development for a drug or intervention is divided into four phases, each with unique objectives (Table 19-1). The choice of the outcome measure should be based on the goals of the development phase.


Table 19-1 Human Phase of Development for a Drug or Intervention

PHASE | GOALS | EXAMPLES OF OUTCOMES
I | Maximum tolerated dose, toxicity, and safety profile | Pharmacokinetic parameters, adverse events
II | Evidence of biological activity | Surrogate markers* such as CRP, blood pressure, cholesterol, arterial plaque measured by intravascular ultrasound, hospitalization for CHF
III | Evidence of impact on hard clinical endpoints | Time to death, myocardial infarction, stroke, hospitalization for CHF, or some combination of these events
IV | Postmarketing studies collecting additional information from widespread use of drugs or devices | Death, myocardial infarction, lead fracture, stroke, heart failure, liver failure

CRP, C-reactive protein; CHF, congestive heart failure.


* A surrogate marker is an event or measurement that precedes the clinical event and is (ideally) in the causal pathway to the clinical event. For example, if a drug is thought to decrease the risk of myocardial infarction (MI) by reducing arterial calcification, changes in arterial calcification might be used as a surrogate endpoint because these changes should occur sooner than MI, leading to a reduction in trial time. Hospitalization is a challenging surrogate marker because it is not clearly in the causal pathway; hospitalization does not cause MI, but heart failure worsening to the degree that hospitalization is required is in the causal pathway. When hospitalization is used as a measure of new or worsening disease, it is important that the change in disease status, and not just the hospitalization event, is captured.



Randomization


Randomization is the process of “randomly” assigning individuals or groups of individuals to one of two or more treatment options. The term random means that the process is governed by chance. Different trial designs may implement randomization in different ways, as described below. The simplest design randomly allocates study participants to one of two treatment arms, with each participant equally likely to be assigned to either arm.
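A minimal sketch of this simplest design, assuming two hypothetical arms labeled "treatment" and "control":

```python
# Sketch: simple (unrestricted) randomization -- each participant has a 50/50
# chance of either arm, independent of all previous assignments.
import random

random.seed(20160812)  # fixed seed so the sequence is reproducible

def simple_randomize(n_participants, arms=("treatment", "control")):
    return [random.choice(arms) for _ in range(n_participants)]

assignments = simple_randomize(10)
print(assignments)
print("treatment count:", assignments.count("treatment"))
```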



Why Randomize?


The notion of randomly assigning individual observational units to one treatment modality or another was first discussed by R.A. Fisher in the 1920s.


Randomization tends to even out any differences between the study participants assigned to one treatment arm and those assigned to the other. The Coronary Artery Surgery Study (CASS) randomly assigned patients with stable class III angina to initial treatment with bypass surgery or medical therapy.2 If this trial had not been randomized and treatment assignment had been left to the discretion of the enrolling physician, it is likely that these physicians would have selected patients who they believed would be “good surgical candidates” for the bypass surgery arm. This would have led to a comparison of good surgical candidates receiving bypass surgery with a group of patients who, for one reason or another, were not good surgical candidates and received medical treatment. The medically treated patients would likely have been sicker, with more comorbidities than the patients selected for surgery. This design would not result in a fair comparison of the two treatment strategies. Randomization levels the playing field.



Intention to Treat


To make randomization work, analysis of RCT data needs to adhere to the principle of “intention to treat.” In its purest form, this means that data from each study participant are analyzed according to the treatment arm to which the participant was randomized, period. In the case of the Antiarrhythmics Versus Implantable Defibrillators (AVID) trial, this meant that data from patients randomly assigned to the implantable defibrillator arm were analyzed in that arm, whether or not they ever received the device.3 Data from patients randomly assigned to the antiarrhythmic drug treatment arm were analyzed with that arm, even if they later received a defibrillator. This may not make obvious sense, and many trial sponsors have argued that their new treatment could not possibly show an effect if the patient never received it. However, the principle of intention to treat protects the integrity of the trial by removing a large source of potential bias. In a trial examining the efficacy of a new antiarrhythmic drug for preventing sudden cardiac death (SCD), for example, a sponsor might be tempted to count events only while the patient was still taking the drug. How, they would argue, could the drug have a benefit if the patient was not taking it? But if, as has happened, the drug exacerbates congestive heart failure (CHF), then patients assigned to the experimental drug would be likely to discontinue it, and any subsequent SCD or cardiac arrest would not be attributed to the drug. In fact, it could be argued that the drug created a situation in which the patient was more likely to die of an arrhythmia.
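A small sketch of the intention-to-treat principle, using a handful of hypothetical records loosely patterned on the AVID design; the column names and data are invented for illustration:

```python
# Sketch: intention-to-treat analysis. Each participant is counted in the arm
# to which he or she was randomized, even when the treatment actually received
# differs. The records below are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "randomized_arm":     ["defibrillator", "defibrillator", "drug", "drug", "drug"],
    "treatment_received": ["defibrillator", "none",          "drug", "defibrillator", "drug"],
    "died":               [0, 1, 0, 0, 1],
})

# Intention-to-treat: group outcomes by the randomized arm
itt = data.groupby("randomized_arm")["died"].mean()

# "As-treated" analysis (for contrast): group by the treatment actually
# received; this comparison is vulnerable to the biases described above
as_treated = data.groupby("treatment_received")["died"].mean()

print("ITT event rates:\n", itt)
print("\nAs-treated event rates:\n", as_treated)
```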



Stratification and Site Effect


Although randomization is quite effective in evening out differences between the populations assigned to the treatment arms, it is not a perfect method. In some instances, important clinical differences have appeared between the two treatment arms. At one interim analysis of the Cardiac Arrhythmia Suppression Trial (CAST) (personal communication), one treatment arm had a markedly lower mean ejection fraction than the other. This difference evened out before the trial was stopped, but if it had not, the analysis would have required adjustment for this very important prognostic difference between the treatment groups.


When an important clinical factor or a potential differential effect of therapy on one group versus another exists, stratification is used to balance the number of patients assigned to each treatment arm within strata. The AVID trial and many other multicenter RCTs were stratified by clinical site. In the case of arrhythmia trials, the skill and experience of the arrhythmia teams or the availability of particular devices at the sites may vary, leading to different outcomes depending on where the participants are randomized. Stratifying by site ensures that, within each site, participants have an equal probability of being assigned to each treatment arm. Other important differences may also vary by clinical site, for example, in surgical trials, where the skill and experience of the surgeon can affect the outcome.


Some investigators get carried away with the concept of stratification, trying to design the randomization so that all possible factors are balanced. It is easy to design a trial with too many strata. Take, for example, a trial stratifying on the basis of ejection fraction at baseline (<30%, 30% to 50%, and >50%) and on whether the required history of myocardial infarction (MI) is “recent,” that is, within the past 6 months, or distant, that is, more than 6 months ago. This creates six strata, a reasonable number for a sample size of, say, 200 or more subjects randomized to one of two treatment options (Table 19-2). But if the decision is also made to stratify by site, with 10 sites, for example, 60 strata would be created, leaving only a handful of expected patients in each stratum. It has been shown that as the number of strata in a conventional randomization design is increased, the probability of imbalances between treatment groups is, in fact, increased as well.4



It is important to adjust for stratification factors in the analysis of clinical trials. Failure to account for the “nonrandomness” that stratification introduces into the randomization affects the validity of the resulting tests and estimates.
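One common way to make such an adjustment is to include the stratification factors as covariates in the analysis model; the sketch below does this for a hypothetical site-stratified trial with a continuous outcome (all variable names and data are simulated assumptions):

```python
# Sketch: adjusting for a stratification factor (here, clinical site) when
# estimating the treatment effect on a continuous outcome. Data are simulated
# and purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, size=n),          # 0 = control, 1 = new drug
    "site": rng.integers(1, 11, size=n).astype(str),  # 10 hypothetical sites
})
# Simulated outcome: a treatment effect plus site-to-site variation plus noise
site_effect = df["site"].astype(int) * 0.5
df["hdl_change"] = 5 + 10 * df["treatment"] + site_effect + rng.normal(0, 8, size=n)

# Treatment effect adjusted for the stratification factor (site)
model = smf.ols("hdl_change ~ treatment + C(site)", data=df).fit()
print("adjusted treatment effect:", round(model.params["treatment"], 2),
      "SE:", round(model.bse["treatment"], 2))
```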



Types of Randomization Designs


The most commonly used and most straightforward randomization design is the “permuted block” design. Using this method, a trial with any number of treatment arms can be designed, and the proportion of patients assigned to each arm can be set. Randomization does not require equal numbers of patients in each arm. In many trials of new drugs, in order to gain information on side effects in early-phase studies, the sponsor may decide to allocate twice as many patients to the new treatment as to the control (a 2:1 allocation).


Permuted block randomization can best be described as constructing small decks of cards, shuffling each deck, and then dealing out the cards. For a design with two treatment options, decks containing two types of cards (say, hearts and clubs) would be created, one card type for each treatment. For equal allocation, each deck would contain an equal number of hearts and clubs. The size of each deck, or block, would depend on stratification and other factors. The deck is shuffled, and as each patient is randomized, the next card is dealt. When all of the cards in the deck have been dealt, a new deck is shuffled and the process is repeated. The size of each deck must be a multiple of the number of treatment arms (weighted by the allocation ratio) and can vary over the course of the randomization sequence. In actual practice, the sizes of the decks are determined in advance, and the decks are shuffled in advance.
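The card-deck description above corresponds to a short sketch of permuted block randomization; the block size of 4 and the arm labels are illustrative choices:

```python
# Sketch: permuted block randomization with 1:1 allocation and a block size
# of 4 (two "cards" per arm in each block). Each block is shuffled, then its
# assignments are dealt out in order before a new block is shuffled.
import random

random.seed(42)

def permuted_blocks(n_participants, arms=("A", "B"), block_size=4):
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    assignments = []
    while len(assignments) < n_participants:
        block = list(arms) * (block_size // len(arms))  # build one "deck"
        random.shuffle(block)                           # shuffle the deck
        assignments.extend(block)                       # deal the cards
    return assignments[:n_participants]

print(permuted_blocks(10))
# For a 2:1 allocation, each block might instead contain ["A", "A", "B"]
# repeated, so the block size would be a multiple of 3.
```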


Permuted block designs, when combined with too many strata, can lead to the same problems described above. For this reason, adaptive randomization is sometimes used. Several types of adaptive designs exist; in each, the next randomization assignment depends, in some way, on the characteristics of the patients who have already been randomized. Baseline adaptive techniques adjust the probability of assigning the next patient to one treatment arm or the other on the basis of that patient’s baseline characteristics compared with those of patients already randomized.5 In a study design stratified by the class of angina, recent or distant history of MI, and ejection fraction, the objective is to keep the treatment assignments balanced within each stratum. So, as each patient is randomized, the randomization algorithm looks at the existing balance in that stratum and assigns treatment on the basis of a biased coin toss.
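A hedged sketch of a biased-coin assignment within strata: when the arms are balanced in the new patient's stratum a fair coin is used, and when one arm is ahead the lagging arm is favored. The 2/3 bias probability and the stratum labels are illustrative assumptions, not the algorithm of any particular trial:

```python
# Sketch: a "biased coin" adaptive assignment within strata. If the two arms
# are balanced within the new patient's stratum, a fair coin is used; if one
# arm is ahead, the lagging arm is chosen with probability 2/3.
import random
from collections import defaultdict

random.seed(3)
counts = defaultdict(lambda: {"A": 0, "B": 0})  # running counts per stratum

def biased_coin_assign(stratum, bias=2/3):
    tally = counts[stratum]
    if tally["A"] == tally["B"]:
        arm = random.choice(["A", "B"])                   # fair coin when balanced
    else:
        lagging = "A" if tally["A"] < tally["B"] else "B"
        leading = "B" if lagging == "A" else "A"
        arm = lagging if random.random() < bias else leading
    tally[arm] += 1
    return arm

# Example: strata defined by ejection fraction category and MI history
for patient_stratum in ["EF<30/recent-MI", "EF<30/recent-MI", "EF30-50/distant-MI"]:
    print(patient_stratum, "->", biased_coin_assign(patient_stratum))
```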



Blinding or Masking Therapy


Ideally, all clinical trials should be double-blind or double-masked studies. That is, neither the patient receiving the treatment nor the medical staff treating the patient should have knowledge of the patient’s treatment assignment. In this way, bias in outcome assessment can be minimized. If the patient is aware of receiving the experimental treatment, he or she may be more likely to report side effects than those who believe that they are receiving placebo or do not know which treatment they are receiving. Similarly, an investigator who knows that the patient has received the experimental treatment may be more likely to see a benefit than if the investigator believes that the patient is receiving a placebo.


Ethically and logistically, many trials cannot be conducted as double-blind trials. For example, trials involving surgical intervention for only one of the study arms cannot, in almost all cases, be conducted as a double-blind study. In a single-blind trial, the patient is unaware of the treatment assignment, but the treating physician is aware of the assignment. Trials of pacemakers might be conducted in a single-blind fashion where participants know that they have a pacemaker but are unaware of the programming mode.


Since the purpose of blinding or masking of therapy is to minimize bias in outcome assessments, a strategy to minimize bias in an unblinded trial is to make use of a blinded event assessor. The Stroke Prevention in Atrial Fibrillation studies used a neurologist unassociated with the routine care of the patient to evaluate the patient in the event that symptoms of stroke were reported. The blinded event assessor was presented with medical records masked to therapy, in this case warfarin versus aspirin. The blinded event assessor evaluated the patient and drafted a narrative based on his or her clinical findings.


In a triple-blind study, the patient, the treating medical staff, and the data coordinating center are all masked to individual treatment assignments. When data coordination is provided by a commercial sponsor, the sponsor can elect to remain blinded to interim study results because of the apparent conflict of interest in making decisions regarding study endpoints. Clinical trials rely on scientific equipoise. As soon as a trend favoring one of the treatment assignments becomes evident, sponsors and clinicians may make decisions different from those they would make with no knowledge of the emerging trend. Early trends often do not pan out, and the experiment can become compromised. It is advisable, whenever possible, to keep the sponsor blinded.
