In 1982, one of America’s “big three” automotive companies launched the marketing slogan “Quality is job one,” which really became its mission statement—and yes, there is a difference between a marketing slogan and a mission statement. Over the past 10 years, the concept of continuous quality improvement has dramatically influenced and improved the practice of medicine in the United States, and the issue of quality, and how we measure and deliver it, strikes at the very heart of medicine today. Ideally, the goal of health care, whether practiced by a nursing assistant, a physician’s assistant, an allied health professional, or a physician, should, first and foremost, be to provide the highest quality compassionate care to each and every patient. But make no mistake about it: medicine is a business, and hospitals, practices, and even academic centers look at the “bottom line.” We certainly live in a time when the focus on cost in medicine, coupled with decreasing reimbursement, is putting an incredible strain on our ability to practice medicine. The pressure to “do more with less” has permeated medicine in general, and it certainly applies to medical imaging and specifically to echocardiography. Although the rates of growth may have slowed, most echocardiography laboratories are well aware of the declining reimbursement for each study and the pressure to do more studies more efficiently. I would argue, however, that high quality should always be our number one goal in providing services (including echocardiographic services) to our patients and their caregivers and the goal of our profession (including the American Society of Echocardiography, the European Association of Echocardiography, and similar professional organizations).
In a recent guideline document developed jointly by the American Society of Echocardiography and the European Association of Echocardiography, Nagueh et al. stated that “The assessment of left ventricular (LV) diastolic function should be an integral part of a routine examination, particularly in patients presenting with dyspnea or heart failure.” Nonetheless, analysis of diastolic function and reporting of findings are not always an “integral part” of echocardiography examinations. In the current issue of JASE, Johnson et al. describe their experience with a quality improvement program they implemented in an extremely busy, multisite, Intersocietal Commission for the Accreditation of Echocardiography Laboratories–accredited laboratory at the Sanger Heart and Vascular Institute in Charlotte, North Carolina. Their study began with an important observation: echocardiographic reports often included only limited comments about diastolic function, or no comments at all. The investigators recognized that in their large practice, there was no uniform policy or standard protocol for evaluating diastolic function. They set about making improvements by designing an approach that involved a series of steps: (1) adapting published literature to outline a system for categorizing diastolic function, using four diastology variables (left ventricular inflow velocities, annular tissue Doppler velocities, left ventricular inflow propagation velocity, and indexed left atrial volume); (2) surveying studies at baseline to determine if diastolic function was assessed and measuring how well the reported findings agreed with the independent assessment by an expert; (3) developing interventions designed to improve quality (this involved a series of steps that is outlined clearly in their report and discussed below in more detail); and (4) performing a second survey to evaluate performance improvement.
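The categorization system in step 1 is, at its core, a rule-based classification over the four diastology variables. The sketch below is purely illustrative: the function, its data layout, and every threshold are my own approximations of commonly cited guideline values, not the actual criteria adopted by Johnson et al.

```python
# Illustrative rule-based grading of LV diastolic function.
# Thresholds approximate commonly cited guideline values and are
# NOT the protocol used by Johnson et al.

def classify_diastolic_function(e_a_ratio, e_prime_septal, vp, lavi):
    """
    e_a_ratio      : mitral inflow E/A ratio
    e_prime_septal : septal annular tissue Doppler e' velocity (cm/s)
    vp             : LV inflow propagation velocity (cm/s)
    lavi           : indexed left atrial volume (mL/m^2)
    """
    # Normal annular relaxation, normal inflow, normal LA size -> normal
    if e_prime_septal >= 8 and e_a_ratio >= 0.8 and lavi <= 34:
        return "normal"
    # E/A < 0.8 -> impaired relaxation (grade I)
    if e_a_ratio < 0.8:
        return "grade I (impaired relaxation)"
    # High E/A with reduced e' -> restrictive filling (grade III)
    if e_a_ratio > 2 and e_prime_septal < 8:
        return "grade III (restrictive)"
    # Intermediate E/A with supporting evidence of elevated filling
    # pressure (reduced e', slow Vp, enlarged LA) -> pseudonormal
    if e_prime_septal < 8 or vp < 50 or lavi > 34:
        return "grade II (pseudonormal)"
    return "indeterminate"
```

The value of writing the scheme down this explicitly, as the authors did in prose, is that every sonographer and reader applies the same decision path, which is precisely what the baseline survey showed was missing.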
What can we learn from the study by Johnson and colleagues?
First, they highlight that even though there are published guideline documents, the mere existence of an evidence-based guideline does not, in and of itself, guarantee that quality and excellence in clinical practice will follow from reading or accepting it.
Second, they show us a method by which a multifaceted approach to quality improvement can be carried out in a high-volume, busy clinical practice. It is important to stress that this is an interactive, nonjudgmental process, whereby all can learn and all can benefit. These investigators developed a process that provided case-based education; a support system for ongoing learning and feedback; technological support in the form of a redesigned reporting system that prompted interpreting physicians not only to perform a thorough examination of left ventricular diastolic function but also to fully summarize the diagnostic criteria in their reports; and, finally, ongoing peer review audits giving individual feedback to interpreting physicians and cardiac sonographers. This process led to a dramatic improvement in the accuracy of both the performance and the interpretation of their echocardiographic studies.
As a means to enhance quality, Johnson et al. used a four-step approach. The first step focused on education: at the Sanger Heart and Vascular Institute’s monthly echocardiographic conferences (held as teleconferences, because the practice is spread over a wide geographic area), sonographers and physicians were educated on the completeness and accuracy both of acquiring studies evaluating left ventricular diastolic function and of interpreting them. Again, this was performed through nonjudgmental but very informative reviews of case-based studies. An archive of these teaching sessions was also provided on the internal Web site, so that physicians and sonographers who had been unable to attend a session could review the material, along with several PowerPoint presentations posted online that further reviewed the parameters for diastolic assessment and the criteria for proper interpretation of diastolic function.
The second step involved developing revised protocols for acquiring and reporting the proper measurements of left ventricular diastolic function, which were distributed to all sonographers and echocardiographers via e-mail. At the same time, the parameters in the echocardiographic reporting system were redesigned to remind physicians of normal and abnormal values for these measurements and to include phrases that could be used in diagnosing and evaluating left ventricular diastolic function.
Step 3 was carried out after a 3-month period and involved a post–educational-intervention audit, whereby studies were reviewed for their technical completeness and accuracy, as well as their diagnostic interpretive accuracy, and each sonographer and reviewing echocardiographer received a report card that benchmarked his or her performance, again in a nonjudgmental fashion.
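The report card in step 3 boils down to tallying, for each reader, how often his or her interpretation matched the expert reference. The following is a minimal sketch of that bookkeeping, with a hypothetical data layout of my own; it reports percent agreement plus Cohen’s kappa, a standard chance-corrected agreement statistic, although the paper does not specify which metrics were actually used.

```python
from collections import Counter

def reader_report_card(paired_reads):
    """
    paired_reads: list of (reader_interpretation, expert_interpretation)
    tuples for one sonographer or physician (hypothetical layout).
    Returns percent agreement and Cohen's kappa against the expert.
    """
    n = len(paired_reads)
    agree = sum(1 for r, e in paired_reads if r == e)
    p_observed = agree / n

    # Chance agreement: sum over categories of the product of the
    # reader's and the expert's marginal frequencies
    reader_counts = Counter(r for r, _ in paired_reads)
    expert_counts = Counter(e for _, e in paired_reads)
    categories = set(reader_counts) | set(expert_counts)
    p_chance = sum(
        (reader_counts[c] / n) * (expert_counts[c] / n) for c in categories
    )

    kappa = (p_observed - p_chance) / (1 - p_chance) if p_chance < 1 else 1.0
    return {"percent_agreement": 100 * p_observed, "kappa": kappa}
```

A chance-corrected statistic matters here because, with only a handful of diastolic grades, even random assignments would agree with the expert some of the time.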
The fourth step was really based on the results of step 3, whereby case studies and interactive review sessions were again held at the monthly echocardiographic teleconference, at which common issues that might have interfered with the proper technical assessment or professional interpretation were reviewed and consensus was reached.
Three months after cycling through this process, a final evaluation was made to track improvements in image acquisition and diastolic function interpretation, and as mentioned above, the results were fairly dramatic. As reported in their study, technical accuracy and interpretive conclusions improved substantially. Overall, correct interpretations increased significantly, while significant decreases in the percentage of cases interpreted incorrectly, or in which diastolic function was not evaluated, were also noted. Johnson et al. not only point out a method for improving the quality of their lab processes and reporting on a complex subject, the assessment of left ventricular diastolic function; they also show that their process led to dramatic improvement in the performance and interpretation of echocardiographic studies that evaluate diastolic function. What is remarkable about this study is that the investigators represent a very busy clinical laboratory with multiple geographic sites, multiple sonographers, and multiple interpreting physicians. One could argue that this private practice is a prototypical example of a well-run practice with high volume and, most likely, a good bottom line. In this “real world” practice setting, a group of busy practicing clinicians and their sonographers were able to successfully design and complete a thoughtful study and quality improvement exercise that not only dramatically improved the technical approach to, and results of, their evaluation of diastolic function, but also improved the quality of care to their patients and the quality of services that they provide to their referring colleagues. I will say that this provides a shining example for our profession.
Johnson et al. highlight the fact that quality improvement is an ongoing process—not a judgmental retrospective exercise—that involves not only education based on analysis of the areas that need improvement, but also the development of educational tools that can be placed online, in PowerPoint presentations, and even incorporated into reporting systems, allowing an active and supportive approach to actually improving the process (in this case, diastolic function analysis) that has been identified. Clearly, it is not enough to critique the performance of an echocardiographic study or silently criticize our colleagues and their interpretations. It is far more effective if, in a supportive manner, improvements can be made both at the front end of data acquisition and at the back end of data interpretation: improvements in quality that will lead to better care for our patients, earn more respect from our colleagues, and, again, place echocardiography in a better light as a valuable diagnostic and management tool. Johnson et al. show that quality improvement can be carried out in a large, geographically dispersed clinical echocardiography lab involving multiple providers, even though this environment might differ substantially from an academic echocardiography lab, where a defined echocardiography leader might more easily set priorities for improvement.
In that regard, a recent report by Johri et al., from the prestigious Massachusetts General Hospital Cardiac Ultrasound Laboratory in Boston, shows that a similar quality improvement process can make a dramatic impact in reducing interobserver variability in left ventricular ejection fraction assessment. Left ventricular ejection fraction may be one of the most widely used numbers in cardiology and is included in many guidelines and many clinical decisions. In fact, in the guidelines published by the American College of Cardiology and the American Heart Association on the assessment of valvular heart disease, specifically looking at aortic stenosis, ejection fraction is the only index of ventricular function quoted, even though many of us know that the measurement of left ventricular ejection fraction, which is a load-dependent variable, is highly dependent on the expertise of the sonographer, the skill of the interpreting physician, and image quality. At the Massachusetts General Hospital echocardiography lab, highly experienced physicians and sonographers undertook a process to decrease interobserver variability in how left ventricular ejection fraction was reported and, thereby, to improve the quality of their echocardiographic reports. Their process was very similar to that undertaken by Johnson et al. Johri et al. had a teaching intervention spread out over 6 months, whereby an initial 1-hour session was centered on a “preintervention” identification of the problem of variability of left ventricular ejection fraction, on the basis of 14 cases. Then, over a period of 3 months, further teaching sessions focused on improvement in the proper acquisition and interpretation of studies, and cases for self-directed review were also provided.
Finally, 3 months after this series of educational teaching activities, improvement in left ventricular ejection fraction interpretation was assessed by presenting further cases, followed by feedback in group discussion, so that consensus on methods to obtain and interpret left ventricular ejection fraction was reached. Like Johnson et al., Johri et al. showed dramatic improvement in their assigned goal.
I first began performing echocardiographic studies in 1975, so I have some perspective on which to base observations. First and foremost is my continued belief that echocardiography truly is the heart of clinical cardiology. If it is performed well, to the highest quality, and interpreted in a similar fashion, it can provide dramatic diagnostic and management information and, thereby, serve our patients, our colleagues, and our profession. But having been in this business for many years, I am continually struck by the criticism from our colleagues that measurements such as left ventricular ejection fraction differ substantially in the same patient, depending on the interpreter and the study (granted, changes in clinical condition can account for some of this), and that measurements of mitral regurgitation severity may vary dramatically from study to study. The current report by Johnson et al. highlights an approach that, although applied here to something as complicated as the evaluation of diastolic function, could easily be extended to other parameters commonly measured in echocardiographic labs, such as the evaluation of valvular stenosis and regurgitation, the analysis of left ventricular systolic function, and so forth. The report from Massachusetts General Hospital highlights a similar process, whereby dramatic improvements in reporting left ventricular ejection fraction were made. The fact that both of these studies show that tremendous improvement in quality can be achieved in busy clinical laboratories, be they private practices or academic centers, highlights the reality that we can, and must, do this for our patients and profession.
We must define important quality measures to be evaluated (be they left ventricular ejection fraction, mitral regurgitation grading, diastolic left ventricular function analysis, etc.) and then, through a case-based approach, in a nonjudgmental but educational environment, provide not only the right forum for learning, but also specific tools to make sure that the needed corrections are made, so that all learn and none feel judged. By providing a mechanism for feedback and for ongoing reanalysis, improvement in the quality of the echocardiographic exam (from the perspective of both technical performance and diagnostic interpretation) will almost certainly be reached. And in doing so, we will undoubtedly provide higher quality care for our patients and, thereby, provide better service to our colleagues and earn more respect for our profession.