Going beyond Right and Wrong: Building the Framework for Quality Improvement in Congenital Echocardiography—You Can’t Manage What You Don’t Measure




At an early morning hour at the start of a busy workday, a dozen individuals position themselves at conference room tables in front of a projector and screen. After the lights dim, a series of scenarios is presented, each one selected to review a diagnostic error. In the ensuing discussion, the group attempts to identify root causes. For those who had been directly involved in the error, personal feelings are set aside with the recognition that a dispassionate evaluation is the best means for identification and prevention of future errors. A nonblameful conference environment allows free discussion, leading to the formulation of an action plan. This type of conference likely takes place in many industries, whether manufacturing, aerospace, or health care. In clinical cardiology, the same kind of multidisciplinary approach is increasingly being used to assess and address diagnostic errors in echocardiography. The goal is to improve patient care.


In this issue of JASE, Benavidez et al. strive to better define the sources of diagnostic error in congenital echocardiography, while at the same time offering examples and suggesting strategies to mitigate error risks. Their work highlights and acknowledges a new focus in 21st-century medicine. In clinical echocardiography, we may know how to do it. The next questions form the crux of Benavidez et al.'s present work: How well are we doing it? How do we manage error risks if we don't measure them? How do we integrate these important goals within the quality domain, recognizing that additional resources will be required for effective implementation?


Like health care itself, the concept of quality is evolving. The Institute of Medicine, an arm of the National Academy of Sciences, lists six quality dimensions: safety, effectiveness, timeliness, equity, efficiency, and patient-centeredness. We hear of "quality assurance" activities, implying that achievement of a preset standard represents "quality"; once the standard is achieved, the process need go no further. That phrase has frequently been replaced by "quality improvement" or the more explicit "continuous quality improvement." The evolving terminology is its own illustration that, as in any biologic system, a standard, an established process, or a condition is ephemeral. Ensuring quality demands meticulous attention and ongoing awareness of situations and resources. Success requires both the realization that however good a quality assurance process or an echocardiogram may seem, it can always be better, and a review process free of blame or retaliation.


Quality improvement initiatives involve not just the echocardiography laboratory but also extend broadly throughout other arenas, such as health care administration and physician and sonographer credentialing. In 2000, the American Board of Medical Specialties, composed of 24 different specialty boards, including pediatrics and internal medicine, chose to shift the idea of physician credentialing from recertification to the trademarked “maintenance of certification.” Even in the face of controversy, maintenance of certification has become an expected activity for those wishing to remain board certified. For both pediatrics and internal medicine, ongoing certification is now predicated on successfully completing activities that are considered to be important in quality improvement. More recently, the self-rebranded Intersocietal Accreditation Commission, formerly called the Intersocietal Commission for the Accreditation of Echocardiography Laboratories, created guidelines to incorporate quality improvement measures in echocardiography laboratories. To come full circle, at the time of this writing, the commission offered its own program for pediatric cardiologists to accrue maintenance-of-certification credit. The mission and directives of creating quality improvement metrics are taking form, but questions remain. What metrics do we measure, and how do we measure them? Cardiologists have the opportunity to have a direct impact on what parameters should be tracked and assessed, while at the same time receiving credit toward certification.


Benavidez et al. build on their previous work, expanding the time frame and increasing the patient cohort size to tease out additional risk factors for diagnostic error. In one pattern, multivariate analysis identified patient weight < 19 kg as a risk factor, with an adjusted odds ratio ranging from 2.2 to 3.5 for weights < 5 kg. On one hand, one might have expected lower weight to translate into better imaging conditions: shorter distances to cardiac structures, coupled with higher-frequency transducers, should theoretically yield a higher rate of diagnostic accuracy than in larger patients. As mentioned, however, overriding conditions may lead to error, such as changing physiology in the very young or increased complexity of disease. Not mentioned, and possibly a more common explanation for the increased risk, is patient movement. Consider an example familiar to the pediatric echocardiographer: a 2-year-old with previously undiagnosed complex structural heart disease requires echocardiographic assessment. He or she is clearly wary of care providers. Seemingly innocuous events, such as a misplaced pacifier or a transiently absent parent, can tip a toddler's mood, turning a potentially useful echocardiographic assessment into a nearly nondiagnostic one.


Other aspects of the article are worth noting. Seventy-eight percent of errors were considered preventable, and 73% of these were considered to be of moderate or greater importance. Error risks were higher in echocardiographic studies performed after hours and on weekends than in studies performed on a typical weekday. If we assume that equipment factors, cardiologist factors, and sonographer training and experience were similar throughout the week, one likely explanation is the presence of additional individuals during weekday studies. A second set of eyes during the study, or during an urgent reading of the echocardiogram, is likely beneficial. It is also reasonable to infer that critical care unit, after-hours, and weekend examinations may involve greater acuity, urgency, and complexity; the clinical impact of the findings, and the consequence of a potential error, are thus augmented.


While creating a framework to evaluate diagnostic error, we are simultaneously identifying challenges. There is inherent subjectivity in rating and categorizing both the source and the severity of clinical errors. Three general factors contributed to 86% of the 254 errors: cognitive (94 of 254 cases [37%]), technical (71 cases [28%]), and procedural or conditional (53 cases [21%]). Within these general factors, the three most frequent contributors were underinterpretation of a finding in 53 cases (21%), poor acoustic windows in 37 (15%), and incomplete examinations in 35 (14%). Together, these contributors accounted for nearly 50% of the documented errors in the 6-year study. Just how each of these factors plays out in individual scenarios and institutions will of course vary, but each deserves attention.


Risk factors, once identified, can become targets for intervention and incremental improvement. For an ideal and effective intervention, however, we are presuming that the target is the primary contributor; we hope and expect that the target is not a surrogate for an unappreciated factor. When diagnostic errors are categorized in the report, each error is paired 1:1 with a contributor, such as a cognitive factor. Although this association simplifies metrics from a quality improvement framework, it is more likely that a combination of events led to the error. This is underscored by the discomfiting observation that 45% of errors were classified as "possibly preventable." Given the totality of circumstances, which factor do we choose for our measurement, much less our intervention? Perhaps a combination of patient complexity and a very rare diagnosis led to a meaningful, preventable error. If this is categorized as misidentification, then such a cognitive error can be reduced with ongoing education, an already established tenet. Acoustic windows, however, are intrinsic to the patient and may account for some of the 22% of errors considered nonpreventable. Revisiting our hypothetical 2-year-old, which factor do we consider the principal target if the child has a rare, complex diagnosis, is screaming, and has poor acoustic windows? If the toddler is sedated, can error risks be mitigated? Or could a combination of high complexity and a very rare diagnosis supersede the benefits of sedation? If the study had been deferred to a regular workday, could the error risk have been reduced? These nuances may be addressed only by digging deeper and looking at greater quantities of data, recognizing that we may not be able to pinpoint a root cause for an individual case.
Finally, quality improvement assessments would ideally be generalizable to other institutions, but variations in practice, training, and personnel, coupled with the challenges of minimizing subjectivity, may make intercenter comparisons difficult at best.


Even with these challenges, interwoven in the current JASE article and in their previous 2008 publication, Benavidez et al. give us examples of, and suggestions about, what a continuous quality improvement project could look like. In the prior study, echocardiographic studies performed in the recovery room were at statistically significantly increased risk for diagnostic error, with an adjusted odds ratio of 7.9. This finding led to a subsequent policy change in which the supervising or interpreting physician reviewed the images before the patient left the examination area, and this intervention reduced errors. As there was a single reader in this model, similar to echocardiographic studies performed in other settings, there is a suggestion that the presence or availability of a second observer in the examination setting may have contributed to the reduction. Just as there are occasional challenges in determining error risks, we may find it equally challenging to gauge our successes; whether the presence of a second observer, assurance of study completeness, or other factors led to the improvement is unclear. Many centers lack the staffing to provide on-site physician presence during echocardiographic studies, though many do so for after-hours and weekend studies. Alternative means of increasing bedside, real-time physician input and "presence" may include the use of telemedicine links or other applications.


While we focus on errors, we should also recognize their rarity. Of 147,000 echocardiographic examinations performed during the study period, the rate of diagnostic error was 0.18%; not stated is that the nonerror rate was 99.82%. One of the infrequent risk factors, "communication or information errors," represented only three of the 254 cases. One can argue that a tightly closed loop of communication between the ordering clinician and the echocardiography team leads to the best outcome: the more clearly the patient's caregiver articulates the issues of concern, the more likely the study is to address them.


Although not within the scope of Benavidez et al.'s report, other realms within echocardiography should undergo the same kind of rigorous evaluation. These could include acquired disease; the training, and its validation, of physicians and sonographers; and errors within specific clinical units such as medicine, surgery, and intensive care. For every question we attempt to answer, there will be additional ones to triage and eventually address. How do we evaluate what we are doing? Can we measure our performance in ways that minimize subjectivity? How do we measure our measurements? What countermeasures can be put in place? Will the health care community accept the cost/benefit ratios that will undoubtedly evolve over time? Can we more visibly acknowledge that the expectation to create and formalize quality projects will require resources? Are we prepared to accept the trade-offs, such as alterations in other quality metrics like efficiency? Can we standardize assessments so that they can be applied and compared across institutions?


Even as we examine diagnostic accuracy with a critical eye, echocardiography remains a cornerstone of clinical cardiology. It is a safe, noninvasive, and effective diagnostic test. It complements the physical examination, and it can be a literally lifesaving evaluation. The strength of independent confirmation of clinical suspicions, the value of quantitative assessment, the occasional revelation of unexpected findings, and the use of echocardiography to replace additional tests having attendant risk and additional cost define its current role in clinical evaluation and management. To maintain that critical role, we should continue striving to measure, evaluate, and work to improve diagnostic accuracy. By directing our efforts toward quality improvement processes, we will continue to foster a modern-day environment that will benefit our patients.

