When compared to international peers, the United States provides medical care at very high per capita cost. Yet despite its expense, the US healthcare delivery system does not reliably produce high-quality outcomes. When viewed through the lens of value (V)—which may be thought of as the quotient of the quality of care (Q) and the resources (R) utilized to provide that care, i.e., V = Q/R—the US healthcare system is underperforming. In response, rather than solely brokering healthcare transactions, the payer community has recently initiated steps aimed not only at reducing the cost of care but also at improving its quality. This new emphasis on the value rather than the volume of care is beginning to force healthcare organizations to take a hard look at their performance data.
From V = Q/R, it follows that to improve the value of care, a healthcare organization must improve Q, reduce R, or ideally, both. With respect to R, all healthcare organizations have systems in place to measure costs. (Whether these systems actually capture the real costs associated with providing a specific service is an important but separate matter.) But how do healthcare organizations capture Q? What is the accounting-system equivalent for measuring and reporting on the quality of the care provided? The importance of having such systems in place must be underscored: absent a robust and widely accepted quality measurement system, improving V devolves into an exercise in reducing R.
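A brief worked illustration (the numbers here are ours and purely hypothetical) makes the arithmetic concrete. Because value is a ratio, proportional changes in its numerator and denominator multiply:

\[
V = \frac{Q}{R}, \qquad \frac{V_{\text{new}}}{V_{\text{old}}} = \frac{Q_{\text{new}}/Q_{\text{old}}}{R_{\text{new}}/R_{\text{old}}}.
\]

A program that improves quality by 10% while trimming resource use by 5% raises value by a factor of 1.10/0.95 ≈ 1.16. But without a credible measure of Q, the 10% gain in the numerator is invisible, and the only demonstrable lever that remains is the denominator.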
So how do we measure Q? According to the Institute of Medicine, quality care comprises the following domains: (1) safety; (2) effectiveness; (3) efficiency; (4) timeliness; (5) patient-centeredness; and (6) equitable distribution. Of these, most healthcare organizations have traditionally regarded safety and effectiveness as the measures of ‘quality’, with efficiency and timeliness considered ‘business operations’. It is only recently that patient-centeredness and healthcare equity have even entered the quality conversation.
Let us assume we have the necessary infrastructure in place to ensure patient safety. If so, we can then begin to assess Q by focusing on the effectiveness of care as a surrogate for overall care quality. What should we measure? For any procedure, we should begin by asking whether that procedure—independent of the technical adroitness with which it is performed—was appropriate for the given clinical situation. Once we have established appropriate use, we can then turn our attention to outcome. Was the procedure a success? Were there complications? Over time, are these rates better or worse than expected?
When presented with clinical situations where intervention and outcome are tightly linked, such as coronary artery bypass grafting (CABG) for coronary artery disease or catheter ablation for atrial fibrillation, the field of risk-adjusted outcomes measurement (while not without imperfections) provides a reliable means to adjudicate performance. Difficulties arise, however, when we must assess diagnostic modalities, such as imaging. In contrast to interventional procedures, with diagnostic modalities the link between the study and its effect is indirect. For any given diagnostic modality, how accurate were we with our assessment? Did we miss important pathology—pathology that will ultimately present at a later point with greater clinical consequence—because our initial assessment was incomplete? Or did we instead make ‘overcalls’, leading to unnecessary downstream testing with its attendant costs and potential complications? Were all the data captured in our imaging evaluation, but we then failed either to synthesize them correctly or to report them completely? With all the steps we need to get right throughout the entire diagnostic ‘imaging chain’, it is easy to see why pinpointing the link between performance and outcome is exceedingly difficult.
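A simple serial-reliability sketch (our own illustration with invented numbers, not a model drawn from any study) shows how fragile this chain is. If a diagnostic episode requires n steps (ordering, acquisition, measurement, synthesis, reporting, communication), each performed correctly with independent probability p_i, the whole chain succeeds with probability

\[
P(\text{chain correct}) = \prod_{i=1}^{n} p_i .
\]

Six steps performed correctly 95% of the time each yield 0.95^6 ≈ 0.74: the final result falls short roughly a quarter of the time, yet no single step is conspicuously at fault. Attributing outcome to any one link becomes correspondingly difficult.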
Laboratory accreditation, which involves the external assessment of a laboratory’s performance relative to accepted standards, has been advanced as a means to assess imaging quality. For echocardiography, there is only one organization recognized as an Accrediting Organization by the Centers for Medicare & Medicaid Services: the Intersocietal Accreditation Commission (IAC). The IAC is a peer-led, not-for-profit, volunteer organization whose mission is to improve healthcare through accreditation. Like others in healthcare, the IAC has long recognized the challenges in assessing and demonstrating quality as it pertains to diagnostic imaging. Based on Donabedian’s pioneering framework for evaluating the quality of medical care, the IAC’s approach has been to develop standards to address each of the domains of his tripartite model—structure, process, and outcome. The IAC Standards tackle laboratory infrastructure (e.g., age of equipment; training and certification of staff), operational performance (e.g., comprehensive imaging protocols; the time allocated to perform the study), and diagnostic yield (e.g., correlation of imaging results with other modalities and/or clinical outcome; quality improvement activities). The assumption: by meeting peer-authored standards for imaging practice, accredited laboratories will not only improve the comprehensiveness, accuracy, and reliability—in essence, the quality—of their diagnostic work, but they will also sustain that improvement over time.
Does IAC accreditation indeed achieve this end? Participation in an IAC accreditation program takes time and costs money. In today’s value-conscious era, are the resources that an organization commits to IAC accreditation justifiable for the improvement in quality that will result? This is not only a crucial question for the IAC, but also one for the imaging community at large. And if not through accreditation, how are we to demonstrate, both inside our complex healthcare organizations (where access to resources is often challenging) and outside—to our patients and payers—that the training of our teams, the capabilities of the equipment they use, the completeness and technical competency of the studies they perform, the sophistication of the interpretations they provide, and the review work they do together to ensure consistent performance—that all of these proxies for quality, when achieved in concert, do in fact result in better patient outcomes?
To address this, the IAC has embarked on a program of research aimed at demonstrating the value of accreditation. Initial efforts utilizing IAC databases have generated insights into practice patterns and the value of technical certification. While meaningful contributions to be sure, such studies do not answer whether the accreditation process per se contributes to better patient outcomes. IAC accreditation is perceived to be of value, but what do we really know?
In this issue of JASE, Behera et al help us begin to answer this question. Given the aforementioned difficulties in demonstrating that “accreditation = quality”, the authors developed an ingenious approach:
• What was analyzed: Consecutive pediatric echocardiographic studies performed on patients who underwent interventional or surgical procedures for congenital heart disease during a 30-month period at California Pacific Medical Center (CPMC, a community hospital). This timeframe extended to a point 1 year prior to when CPMC sought IAC accreditation, and these echo studies served as a pre-intervention data set. Similar studies performed during the 2 years after CPMC achieved IAC accreditation served as a post-intervention data set. Studies in each data set were matched for patient age and diagnosis to studies performed over the same timeframe at the Lucile Packard Children’s Hospital (LPCH, an academic referral children’s hospital). LPCH had achieved IAC accreditation prior to and maintained it throughout the study period. The LPCH studies served as a reference data set.
• How the analysis was performed: Echo studies were analyzed by two independent reviewers for image quality and study comprehensiveness. A third reviewer analyzed the echo reports and medical charts for report completeness and diagnostic accuracy. The metrics for assessing image quality, study comprehensiveness, and diagnostic accuracy were developed by the American College of Cardiology Adult Congenital and Pediatric Cardiology Quality Metrics Working Group Initiative. The metrics for assessing report completeness were developed by the IAC and have been approved by the American Board of Pediatrics as an MOC Part 4 quality improvement activity.
• What was compared:
  ○ Echo studies performed at CPMC pre-accreditation were compared to both (1) CPMC post-accreditation studies and (2) LPCH studies from the same time period.
  ○ Echo studies performed at CPMC post-accreditation were compared to LPCH studies from the same time period.
  ○ As a form of control for changes in practice over time, LPCH studies from the early time period were compared to the later studies.
Behera and colleagues found that when compared to studies performed prior to IAC accreditation, studies performed at CPMC after achieving IAC accreditation were significantly more comprehensive in image acquisition and more complete in reporting findings. They also found that LPCH, which had maintained continuous IAC accreditation throughout the study period, not only sustained its performance but also scored significantly higher than CPMC in both eras for both image quality and study comprehensiveness. Report completeness improved significantly post-accreditation at CPMC and was excellent at LPCH throughout the entire study period.
What about clinical outcome? As determined by the congruence of echo findings with the interventional or surgical reports, there was no change in the rate of diagnostic accuracy at CPMC before versus after IAC accreditation. The rates of diagnostic accuracy at the continuously accredited LPCH mirrored those at CPMC. Moreover, when comparing the early LPCH studies to the later LPCH cohort, an increase in the diagnostic error rate was seen over time.
So how are we to interpret the results of Behera et al? Does IAC accreditation merely result in greater discipline in scanning and reporting without any real difference in clinical outcome? Before assuming this, several study limitations must be acknowledged. First, note that this was a study involving pediatric echocardiography. In >25% of those studies with diagnostic errors, patients were unable to complete a comprehensive echocardiographic study. As the authors point out, both CPMC and LPCH utilize sedation in <5% of their patients, and patient agitation is strongly linked to decreased diagnostic effectiveness. Second, a number of the diagnostic errors were linked to known limitations of echocardiographic imaging. Examples of such errors include failing to identify branch pulmonary artery stenosis, left coronary artery compression, or the presence of a patent foramen ovale. (Importantly, on reviewing the error table in the manuscript, it appears that more complete imaging did not lead to ‘overcalls’ of pathologic findings.) Third, we must recognize that this study population was highly selected. Only patients who went on to interventional or surgical procedures were included. We do not know the rates of diagnostic accuracy for those patients undergoing echocardiographic studies but not proceeding to intervention. Such patients constitute the overwhelming majority of those who will pass through any lab.
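A deliberately invented numerical example (our figures, not the study's) makes this selection problem concrete. Suppose a laboratory performs 1,000 echocardiographic studies in a year, of which 50 proceed to intervention and therefore have their findings checked against surgical or catheterization reports:

\[
\text{verifiable fraction} = \frac{50}{1000} = 5\%.
\]

Even a 1% undetected error rate among the remaining 950 studies (roughly 10 misreads) would never enter the accuracy statistics. Accuracy measured in the verified 5% may bear little relation to accuracy across the laboratory's full caseload.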
This last point highlights the difficulty in measuring the utility of quality efforts in diagnostic imaging. While we could posit that the post-IAC accreditation increase in the comprehensiveness of imaging and completeness of reporting would lead to greater detection and recognition of clinically significant pathology, this is conjecture. Decision-making in patient care involves multiple inputs; the thoroughness, accuracy, and precision of our imaging data are central but not exclusive in driving patient outcome. Unless we are able to track outcomes of a significant number of patients undergoing diagnostic imaging studies over a sufficient period of time—patients with both positive and negative findings from the study—and then link their outcomes back to the diagnostic study, we cannot truly assess the diagnostic accuracy of the study. Without this, we cannot determine which of the particular processes or protocols or programs that we put in place as part of our quality efforts were indeed effective in improving outcomes. In short, we cannot assess the quality of our quality programs.
Looking beyond accreditation efforts, clinical data registries hold promise for closing this information gap. The soon-to-be-launched ImageGuideEcho Registry, ASE’s planned Qualified Clinical Data Registry, is intended to provide the means to connect imaging test results with ultimate clinical outcomes. By coupling these data with the laboratory program data obtained through accreditation, we will be better positioned to measure and continuously improve the quality of our imaging efforts.
Determining the value of accreditation or any other quality improvement activity pertaining to diagnostic imaging remains challenging. Yet given the realities of today’s healthcare environment, this has never been a more necessary pursuit. A complete answer will not arrive from one study. It will take additional thoughtful studies like the one performed by Behera et al to extend our understanding, each examining our quality improvement efforts from a slightly different perspective. With each added study, like another tile added to a mosaic, the picture will begin to take shape. Only then will we be able to discuss quality with the same rigor with which we discuss cost, and only then will we be able to assess the true value of the care we provide.
Disclosures: Dr. Rose is a former Chair of the Intersocietal Accreditation Commission. Dr. Johnson reports no relevant disclosures.