The multimodality management of inoperable non–small cell lung cancer (NSCLC) – from early-stage disease to locally advanced presentations – has changed substantially over the past 20 years. With the exception of small stage I NSCLC, most nonmetastatic patients are treated with both local and systemic therapy, and there is increasing interest in the integration of all three modalities. Radiation oncology has experienced dramatic technologic innovations over this time, which have allowed for reduced toxicity, consequent improvements in the therapeutic ratio, and, in the notable case of stage I NSCLC, significantly higher cure rates.
The purpose of this chapter is to summarize the state of contemporary thoracic radiotherapy as used in the management of nonmetastatic, inoperable NSCLC. Inherent to this discussion are the relevant improvements in radiation therapy planning and delivery, which are outlined at the beginning of this chapter. The remarkably improved efficacy of stereotactic body radiotherapy (SBRT) for stage I NSCLC is then described, and the discussion concludes with a focus on locally advanced lung cancer, highlighting three key and controversial issues central to the development of a treatment plan: the use of chemotherapy, the total radiotherapy dose, and the volume of tissue irradiated.
In the early era of radiotherapy, radiation planning was based on external anatomic landmarks and simple measurements of patient thickness.1 These plans were obviously crude, but the large field size presumably made up for inaccuracies in treatment planning. By the 1960s, fluoroscopic simulators, which emulated treatment machine geometry, were developed commercially, allowing radiation oncologists to design fields based on bony anatomy. Radiation planning was performed in two dimensions following the fluoroscopic simulation, in which plain radiographs were taken in the treatment position. The external contour of the patient was modeled at the isocenter of the field, and relevant internal structures were drawn on the contour by the physician, including the target and critical normal organs. The appropriate location of these structures was determined by their anatomic relationship with bony anatomy. Although the visualization of bony anatomy allowed radiation fields to become more complex, they were still fundamentally limited by an inability to know the three-dimensional (3D) location of the tumor and surrounding normal structures.
The 1970s witnessed the dawn of axial imaging, as computed tomography (CT) and magnetic resonance imaging (MRI) were developed and introduced into medical care.1 As soon as CT was developed, it became obvious to radiation oncologists and physicists that the technology could revolutionize radiation planning.2 First, the anatomic detail dramatically improved the physician’s knowledge of tumor extent, and theoretically this information would lead to better target coverage. Second, the 3D dataset would allow the radiation planner to create a substantially more sophisticated beam arrangement, using computerized dosimetry and a “beam’s eye-view” to optimally cover the tumor and avoid normal structures.3 Multiple research groups attempted to merge CT technology with radiotherapy planning, and by 1996, several commercial 3D planning systems became available, bringing conformal radiotherapy (CRT) into general practice.4 Radiotherapy plans created by these systems are termed 3D-conformal radiotherapy (3D-CRT), in contrast to the plans calculated from fluoroscopic simulators, simply termed 2D.5
As detailed earlier, the bedrock of modern thoracic radiotherapy is the CT simulation, in which patients are first immobilized and then CT is performed, ideally with intravenous contrast. Immobilization is often achieved with a thick plastic bag filled with small beads: once the patient is in the appropriate position, a vacuum is applied, and the bag holds its shape as a rigid mold. The CT is typically obtained in 2.5- to 3-mm-thick slices, though thinner or thicker slices may be used depending on the indication.
Although one of the most obvious benefits of CT simulation is the improvement in target delineation, the information contained in the 3D dataset also has provided tremendous insight into predictors of radiation pneumonitis (RP). Treatment planning software can calculate a dose–volume histogram (DVH), which describes the percent of lung that receives a certain dose, as well as additional statistics such as the mean, maximum, and minimum dose. For example, the percent volume of lung receiving more than 20 Gray (Gy) is called the V20, and similarly the percent volume of lung receiving more than 5 Gy is the V5. Although much work needs to be performed to better predict the risk of this severe complication – particularly in the molecular arena – rough predictors of RP have been devised, such as the V5, V20, and mean lung dose (MLD), and these help guide radiation oncologists as they determine the safety of a given radiotherapy plan.6,7
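The DVH metrics above are straightforward to compute once per-voxel lung doses are known. The following is a minimal illustrative sketch, not vendor planning software: it assumes equal-volume voxels and uses a plain Python list of hypothetical per-voxel doses.

```python
def dvh_metrics(lung_dose_gy):
    """Summary DVH statistics from per-voxel lung doses (in Gy).

    Assumes all voxels have equal volume, so percent volume is
    simply the percent of voxels above each dose threshold.
    """
    n = len(lung_dose_gy)
    return {
        "V5": 100.0 * sum(d > 5.0 for d in lung_dose_gy) / n,    # % lung > 5 Gy
        "V20": 100.0 * sum(d > 20.0 for d in lung_dose_gy) / n,  # % lung > 20 Gy
        "MLD": sum(lung_dose_gy) / n,                            # mean lung dose (Gy)
    }

# Hypothetical eight-voxel "lung": 3 voxels exceed 20 Gy, so V20 = 37.5%
metrics = dvh_metrics([0.0, 2.0, 6.0, 6.0, 15.0, 22.0, 30.0, 40.0])
```

In a real planning system the dose grid and lung contour come from the 3D dataset, but the V5, V20, and MLD reported on a plan are exactly these kinds of threshold statistics.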
The salient benefit of CT-based planning is the improved delineation of the target. This process has been particularly advanced by the development of four-dimensional CT (4D-CT) simulation, in which the fourth “dimension” is time.8 In brief, during 4D-CT an external marker is placed on the patient and used to track the phases of the respiratory cycle. The scan is repeated multiple times at each couch position, so that several axial slices are obtained at a given position; these slices are then reformatted according to the respiratory phase at which they were acquired, yielding a “movie” of the tumor over the respiratory cycle. Multiple studies have confirmed that lower lobe primary masses and lymph nodes, particularly in the hilar and subcarinal stations, often move more than 1 cm in the superior–inferior direction.9,10 Without this patient-specific knowledge, margins around the tumor could be either too small or too large; in comparison, simulation with 4D-CT allows for an optimal margin and improves the therapeutic ratio.
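The phase-sorting step can be sketched as a simple binning operation. This is an illustrative toy with an invented slice-tuple format, not an actual scanner reconstruction: each repeated slice is tagged with the respiratory phase (from the external marker, as a fraction of the cycle) and grouped so that every phase bin can later be reformatted into its own 3D volume.

```python
def bin_by_phase(slices, n_phases=10):
    """Sort repeated CT slices into respiratory-phase bins.

    Each slice is (couch_position, phase, image), where phase is the
    fraction of the respiratory cycle (0 <= phase < 1) recorded by the
    external marker at acquisition time. Returns a dict mapping
    bin index -> list of (couch_position, image) pairs.
    """
    bins = {p: [] for p in range(n_phases)}
    for couch_position, phase, image in slices:
        bins[int(phase * n_phases) % n_phases].append((couch_position, image))
    return bins

# Toy example: three slices at two couch positions
phase_bins = bin_by_phase([
    (0.0, 0.05, "slice_a"),  # near end inspiration, first couch position
    (0.0, 0.55, "slice_b"),  # mid expiration, first couch position
    (2.5, 0.05, "slice_c"),  # near end inspiration, next couch position
])
```

Playing the reconstructed volumes of the bins in sequence is what produces the "movie" of tumor motion described above.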
Furthermore, the use of positron emission tomography (PET) and PET-CT has also improved the delineation of primary and lymph node targets. Although a complete discussion of PET imaging is beyond the scope of this chapter, several studies have shown that PET and/or PET-CT aids the radiation oncologist in distinguishing primary tumor from atelectasis, and malignant from benign adenopathy.11 For example, De Ruysscher showed that PET-CT–based radiotherapy planning could increase the total radiation dose by over 20% while keeping the toxicity risk constant, by virtue of the smaller treatment volume defined on metabolic imaging.12 Similarly, RTOG 0515 prospectively recorded the treatment volumes contoured by the radiation oncologist on CT and PET-CT and found that the gross tumor volumes (GTV) were significantly smaller on PET-CT, and the nodal volumes were changed in 51% of patients.13 Although not mandatory, the integration of PET-CT information with the CT simulation images has become standard practice in treatment planning.
Once 3D information became available and treatment planning software developed the computing power and algorithms to calculate dose throughout the treatment volume, beam arrangements became more complex, and physicians were able to maximize dose to the tumor while reducing the dose to the organs-at-risk. Today, the majority of patients are treated with 3D-CRT, although intensity modulated radiation therapy (IMRT) has also been employed in the past several years.
IMRT introduced two new concepts in radiation planning and delivery. The first is termed “inverse planning,” in which the physician specifies the dose to tumor volumes and normal structures (e.g., esophagus, spinal cord), and the physicist instructs a computer algorithm to design a plan to meet those constraints. This process is distinctly different than traditional “forward planning,” in which the physician first creates the fields and the physicist or dosimetrist then determines the dose distribution. The benefit of the inverse planning algorithm is the ability to carve dose away from multiple normal structures while ensuring a radical dose to the planning target volume (PTV), and it is simply too difficult to accomplish this feat without a sophisticated cost function.
The second main component of IMRT is, as the name implies, intensity modulation. By definition, intensity is the total energy per unit area per unit time.14 In standard radiotherapy, the intensity across a given beam is essentially constant: if a particular beam delivers 100 cGy to a 10 × 10 cm field at a certain depth, essentially the entire 100 cm² area receives 100 cGy. In contrast, for a given IMRT field, the intensity may vary substantially across the field, a dosimetric feat typically accomplished through the use of multileaf collimators (MLC), narrow moveable leaves in the head of the linear accelerator.2 In IMRT, each radiation beam is split into multiple subfields, each with a different arrangement of the MLCs, such that the final delivered dose – the summation of the subfields – is highly variable across its area.
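The subfield summation can be illustrated with a toy "step-and-shoot" calculation. The data structures below are invented for illustration, and the model deliberately ignores leaf transmission, scatter, and depth dose; it only shows how summing differently shaped apertures yields a nonuniform intensity map.

```python
def delivered_intensity(subfields):
    """Sum the intensity map from a sequence of step-and-shoot IMRT subfields.

    Each subfield is (monitor_units, aperture), where aperture is a 2D grid
    of 0/1 values marking where the MLC leaves are open. The delivered map
    is the weighted sum of open apertures, so intensity varies across the field.
    """
    rows, cols = len(subfields[0][1]), len(subfields[0][1][0])
    intensity = [[0.0] * cols for _ in range(rows)]
    for mu, aperture in subfields:
        for i in range(rows):
            for j in range(cols):
                intensity[i][j] += mu * aperture[i][j]
    return intensity

# Two subfields: a fully open field plus a boost through the center column
imap = delivered_intensity([
    (50.0, [[1, 1, 1], [1, 1, 1], [1, 1, 1]]),  # full aperture
    (50.0, [[0, 1, 0], [0, 1, 0], [0, 1, 0]]),  # only center column open
])
# Center column accumulates 100 units; the rest of the field receives 50
```

A real plan uses many more subfields per beam (and the inverse-planning cost function chooses their shapes), but the delivered dose is still fundamentally this kind of weighted sum.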
Planning studies have suggested that IMRT can reduce the volume of normal lung receiving radiotherapy, which may lead to a lower risk of pneumonitis and allow for dose escalation.15 Moreover, by design, IMRT is able to push dose away from important normal structures such as the spinal cord, brachial plexus, and esophagus. However, IMRT faces at least three fundamental obstacles to its routine use in thoracic radiotherapy: potential geographic miss due to tumor motion (i.e., the “interplay effect”), increased spread of low-dose irradiation to the normal lung, and cost. With respect to the interaction of tumor motion and the treatment beam, consider that the open area of the beam is constantly changing throughout the treatment while the tumor is constantly moving; if the tumor moves during respiration into an area where the MLCs are closed, no dose is delivered for that period. This underdosing can accumulate over the course of treatment, and the tumor may theoretically receive a subtherapeutic dose.16
The second concern with IMRT is the increase in low-dose spillage in the lung. Although there are typically four to five beams in a standard 3D-CRT plan, IMRT may utilize seven to nine beams, and consequently there may be a higher V5 (i.e., percent of the lung receiving 5 Gy or more), which could lead to an increased risk of pneumonitis.17 Although some data refute the hypothesis that IMRT increases the pneumonitis rate and argue that IMRT decreases it, other reports have supported the notion of a higher rate of severe toxicity.18,19 The final issue with IMRT is cost, as the planning and delivery of the treatment are more complex, time-consuming, and thus expensive. Thus, whether a patient is better served by 3D-CRT or IMRT is a patient-specific decision.
Despite these advances, in some situations highly mobile tumors require large treatment ports and thus an intolerable dose to the normal lung tissue. Although one solution to this dilemma is prescribing a lower dose of radiotherapy, this approach is clearly suboptimal. Thus, additional techniques have been developed to “gate” a mobile tumor; that is, to turn on the irradiation only during certain parts of the respiratory cycle (e.g., end inspiration). There are three fundamental approaches to achieve this gating: (1) attach the patient to a respirator-type device and turn on the beam only when a specified volume of air is inhaled (i.e., Active Breathing Control™); (2) place external markers on the patient (or use surface anatomy) as a surrogate for diaphragmatic motion, and treat only when that marker is at a position representing a given phase of respiration; or (3) insert an internal radio-opaque fiducial into the patient’s tumor that is visible on fluoroscopy and use it to verify the external marker.20–22 Whether these tools are implemented for a given patient is a function of tumor motion, risk of lung complications, patient tolerance of the gating method, and available technology.
A highly precise radiotherapy plan is ultimately useless if the patient is not reproducibly set up for each daily treatment. Patients typically receive small permanent tattoos at simulation, and these tattoos serve as markers for the radiation therapist to align the patient to the isocenter (i.e., the reference point of the treatment plan, about which the machine rotates) for each treatment. However, skin marks are unreliable from day to day, particularly as patients lose weight during treatment, and thus additional isocenter verification is important, especially as the radiotherapy plan becomes more complex. For many years, the megavoltage beam from the treatment machine itself was used as an imager (“port film”), and radiographs were taken at the isocenter at least once per week. However, megavoltage radiographs have very poor resolution; although an improvement over setup to skin marks alone, they left substantial residual setup uncertainty.
Over the past 10 years, linear accelerators have become equipped with kilovoltage imagers mounted perpendicular to the head of the gantry. This arrangement allows physicians to obtain diagnostic-quality x-rays before each fraction of treatment without adding meaningfully to the total radiation dose, making daily setup far more precise. This concept of using higher-resolution daily imaging is termed image-guided radiotherapy, or IGRT; in contrast to the acronym IMRT, which defines a very specific type of treatment planning and delivery, IGRT refers to the paradigm of using daily imaging to reduce the treatment margins (and even shrink them over the course of therapy).23 Image-guided radiotherapy is thus agnostic to the actual radiotherapy delivery technique.
Although daily orthogonal kilovoltage imaging was an improvement over daily megavoltage port films, the real advance in IGRT was the development of cone-beam computed tomography (CBCT), a CT scan performed on the linear accelerator in the treatment position. In contrast to diagnostic CT scanners, which have multiple rows of detectors, CBCT uses the attached kilovoltage imager (or even the treatment beam as a megavoltage imager) to sweep around the patient, and the data are reformatted at the treatment console into a CT image. Although the image quality is far from diagnostic level, it represents a remarkable improvement over orthogonal imaging, and the key structures (e.g., tumor, spinal cord) are easily visible; the accuracy of patient setup is therefore significantly improved, allowing for the possibility of decreased margins and either a higher total radiation dose or lower lung toxicity.
The concept of stereotactic radiation treatment was initially devised in the 1960s by neurosurgeon Lars Leksell and physicists Kurt Liden and Borje Larsson, who developed intracranial stereotactic radiosurgery (SRS).24 Despite minimal to nonexistent axial imaging, SRS was feasible because the highly precise stereotactic frame was screwed into the patient’s head, and thus the degree of accuracy of the stereotactic system translated to an equivalent level of accuracy of tumor localization.
However, prior to the very recent development of IGRT, obtaining the same level of setup accuracy in the extracranial body was much more difficult. One of the earliest reports of SBRT to the spine involved surgically fixating the vertebra to the stereotactic coordinate system, but this is not a viable solution for routine practice, and for almost any other body site a frame cannot be screwed into the body.25 Thus, as opposed to intracranial SRS, there was no straightforward, highly accurate system for setting the patient up at isocenter. Moreover, the ablative doses involved mandated highly conformal 3D treatment planning, as delivering 54 Gy in 3 fractions with SBRT carries a far higher risk of significant side effects than 60 Gy in 30 fractions if normal tissue is inadvertently included.
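The radiobiologic gap between these two schedules can be quantified with the standard linear-quadratic biologically effective dose (BED). The α/β values below are conventional textbook assumptions (roughly 10 Gy for tumor, 3 Gy for late-responding normal tissue), not figures taken from this chapter.

```python
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
    """Biologically effective dose under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    d = dose_per_fraction_gy
    return n_fractions * d * (1.0 + d / alpha_beta_gy)

# Tumor effect (alpha/beta ~ 10 Gy): the SBRT schedule is far more potent
sbrt_bed = bed(3, 18.0, 10.0)          # 54 Gy in 3 fractions -> 151.2 Gy BED
conventional_bed = bed(30, 2.0, 10.0)  # 60 Gy in 30 fractions -> 72.0 Gy BED
```

Under these assumptions the three-fraction schedule more than doubles the tumor BED despite a lower physical dose, which is precisely why such ablative regimens demand highly accurate setup and conformal planning.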
By the 1990s, CT simulation and 3D-CRT became feasible, and extracranial stereotactic frames were designed that allowed for rigid immobilization and stereotactic localization without surgical intervention.26 Some centers also developed fluoroscopic systems that could image internal fiducial markers in lung tumors to ensure the treatment beam was turned on only when the lesion was in the field.27 These highly precise setup devices finally allowed for high-dose SBRT (Fig. 85-1), which is typically delivered using multiple (8–12) noncoplanar beams that converge on the target, producing a high dose within the tumor and a sharp dose falloff into the normal lung. Some physicians now promote calling this technique stereotactic ablative radiotherapy (SABR).
Figure 85-1. Stereotactic body radiotherapy (SBRT) plan for a patient with stage I squamous cell carcinoma. This patient has a history of a liver transplant and did not have the performance status to tolerate surgical resection. He was treated with SBRT to a total dose of 60 Gy in 5 fractions; the 5-fraction treatment was chosen to reduce skin toxicity. Panel A. The beam arrangement of 11 beams, including 5 noncoplanar beams, was chosen to reduce dose to the skin and chest wall. Panel B. Axial dose distribution. Each line represents tissue receiving a given dose. Notice the dramatic reduction in dose within centimeters of the target (red outline). Panel C. Coronal dose distribution. Panel D. Dose–volume histogram. Cross-hairs show the “V20,” the volume of lung receiving 20 Gy or more (in this case, 3.8%).
Many technological advances have occurred since the first linear accelerators were used to deliver thoracic SBRT. For example, treatment planning software has become remarkably more sophisticated, IGRT enables highly accurate setup without requiring an external coordinate system, and systems such as respiratory gating have been introduced as well. Linear accelerators have been specifically designed for SBRT (e.g., CyberKnife™, Accuray, Sunnyvale, CA; Novalis TX™, Varian Medical Systems, Palo Alto, CA), which include unique IGRT systems for improved setup and delivery accuracy.28 Although there are technical differences between these platforms, such as the typical need for internal gold fiducials for treatment with the CyberKnife, essentially any lesion that can be treated with one system can be treated with another; the key to a successful treatment is physician expertise rather than a particular linear accelerator.28
Medically Inoperable Stage I Lung Cancer
As detailed in other chapters, the standard-of-care in the management of stage I NSCLC is lobectomy and lymph node dissection. However, given the general medical compromise of this patient population, a nontrivial percentage of individuals are not candidates for this procedure. Although there is controversy surrounding the relative benefits of segmentectomy and wedge resection, until recently there was no viable nonsurgical alternative.
Indeed, until the mid-2000s, the standard approach for the treatment of medically inoperable stage I NSCLC was conventionally fractionated radiotherapy to 60 to 70 Gy in 30 to 35 fractions. Given the relatively small field, the treatment was tolerable, but it was also associated with unacceptably poor local control. For example, Bradley29 reported the outcomes of patients at Washington University in St. Louis with medically inoperable stage I NSCLC treated with RT alone to a median dose of 70 Gy. Although patients with tumors 2 cm or smaller experienced a 2-year local control probability of 83%, tumors between 2 and 3 cm had a local control probability of only 62%, and that fell to 50% for tumors between 3 and 5 cm. Such poor local control rates are not compatible with long-term survival in this population. Similarly, in the University of Michigan dose-escalation trial of node-negative patients, 10 of 35 patients (29%) developed an in-field recurrence despite a median total dose of 84 Gy.30 In a Cochrane review of medically inoperable NSCLC, local recurrence rates ranged from 6% to 70%, with most between 40% and 60%.31 In sum, conventionally fractionated radiotherapy is clearly inadequate for stage I disease, and these poor outcomes are compounded by the inconvenience of 6 to 8 weeks of daily treatment.
The development of SBRT has significantly improved local control and likely overall survival in patients with medically inoperable NSCLC, as the very high doses of radiotherapy are thought to overwhelm any underlying radioresistance of the tumor. Several of the earliest series come from Japan, where patients were initially treated with dosing schemes lower than the 54 to 60 Gy in three fractions typically used now. The outcomes, such as those reported in 2004 by Onishi et al.,32 were promising. In a series totaling 245 patients, the overall local progression probability was only 14.5%. There was a volume–response relationship, however, as the local failure probability was 9.7% for T1 tumors versus 20% for T2 malignancies. Although many other retrospective series showed comparable outcomes, only one phase I dose-escalation trial has been performed.
In this trial, investigators at Indiana University escalated the total dose from 24 Gy in 3 fractions to 60 Gy in 3 fractions for T1 tumors without reaching a dose-limiting toxicity; the maximally tolerated dose for T2 tumors was 66 Gy, as an unacceptable rate of lung toxicity was seen at 72 Gy.33 Of note, only 1 local failure was seen in the patients who received over 16 Gy per fraction. The investigators continued to the phase II component of the trial, which found continued excellent local control (95% at 2 years) but treatment-related mortality in six patients whose lesions were centrally located. Four of these patients died from pneumonia, one from a pericardial effusion, and one from hemoptysis in the context of recurrent tumor. Although the true etiology of these toxicities – that is, whether they were unique to SBRT treatment or a stochastic process – is debatable, the Indiana experience has defined the Radiation Therapy Oncology Group (RTOG) eligibility criteria for SBRT, which require the lesion to lie at least 2 cm beyond the proximal bronchial tree.
On the heels of this single-institution study, RTOG 0236 was a multi-institutional phase II study of SBRT for medically inoperable patients with peripheral stage I NSCLC.34 The prescription dose was 60 Gy in 3 fractions, comparable to the Indiana experience. This trial showed that SBRT was feasible and highly efficacious in a multi-institutional setting, as the 3-year in-field, in-lobe, and locoregional control probabilities after treatment were 98%, 91%, and 87%, respectively. The 3-year overall survival was an impressive 56%, particularly notable given the severe underlying comorbidities in the cohort. It is important to note that the dose in this trial was 20 Gy × 3, but that this dose was calculated without “heterogeneity corrections,” which adjust for the air density of the lungs. As a consequence, the peripheral dose in RTOG 0236 was in effect 54 Gy, and thus 18 Gy × 3 has been essentially adopted as the standard regimen for peripheral lesions.35 Table 85-1 displays reported prospective and notable retrospective trials using SBRT. The results have been so promising that a national trial comparing SBRT with sublobar resection in medically compromised patients has been activated (ACOSOG Z4099/RTOG 1021).