Walking the Walk…or Achieving Quality Is Harder Than It Seems




With the increasing awareness of quality gaps in health care has come a growing appreciation that this is a problem that needs to be confronted head on. Echocardiography has embraced this challenge, although, like the rest of medicine, we are not always sure how to proceed. Traditionally, quality has depended largely on individual expertise and vigilance, although quality management principles favor systems-based approaches. However, there are few prospectively designed, thoughtfully implemented, and proven systems approaches in echocardiography. Adoption of a systems-based approach would require a fundamental change in how we approach quality.


As a first step, the American College of Cardiology, the American Society of Echocardiography, and other organizations developed a taxonomy for approaching imaging quality that focused on the domains of laboratory structure, patient selection, image acquisition, image interpretation, results communication, and improved patient care. Quality goals were identified for each domain. However, converting these concepts into actual improvements in our everyday work is a complex task. Indeed, the translation of clinical evidence into improved population health (T4 translation) is becoming an important area of active research and national investment. Unfortunately, although we believe we know quality when we see it, proven successful strategies to improve quality in echocardiography are rare.


There are a number of places to start. Although quality goals in image acquisition and interpretation are perhaps obvious, results communication is also important, because the information derived from echocardiography can improve outcomes only if it is understood and valued by the referring clinicians who must incorporate it into care. Consensus goals for results communication included interpretability, clarity, definitiveness, completeness, and timeliness. In support of these and other aims, structured reporting has been hailed as a tool to improve workflow, improve report completeness and organization, aid report distribution and storage, and provide interoperability with digital databases, repositories, and electronic medical records. However, we do not know whether changing reporting structure and process improves quality and, ultimately, patient outcomes. And if so, what approaches work best?


In this issue of JASE, the report by Spencer et al. relates a single laboratory’s experience with a multiyear quality improvement project related to transthoracic echocardiographic reporting. The authors drew upon a strength of structured, or facilitated, reporting to develop a system of real-time error checking of completed reports, seeking to reduce errors, omissions, and inconsistencies by pointing these out to readers at the time of report finalization. The strengths of this undertaking are many, beginning with the careful prospective study of a real-world application, in nearly 8,000 echocardiographic reports, of a thoughtfully developed tool designed to address a specific quality issue. The approach was comprehensive: 580 “rules” were created, comprising 350 mutually exclusive or contradictory findings (errors) that were required to be resolved before reports could be finalized and 230 inconsistencies for which resolution was suggested but not mandatory, for a combined total of 4,415 possible applications. The present tool represents substantial evolution and expansion from an earlier version, and the study includes a survey of readers documenting their impressions of the tool’s utility.
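To make the mechanics of such real-time checking concrete, the minimal sketch below illustrates one generic way a rule-based report checker could distinguish mandatory (error) rules, which block finalization, from suggested (inconsistency) rules, which do not. The field names, rules, and thresholds are hypothetical illustrations and are not drawn from Spencer et al.'s actual rule set or software.

```python
# Minimal sketch of rule-based report checking. All field names, rules, and
# thresholds are hypothetical illustrations, not the authors' actual system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]   # returns True if the report violates the rule
    mandatory: bool                 # True = must be resolved before finalization

RULES = [
    Rule("severe_AS_vs_low_mean_gradient",
         lambda r: r.get("AS_severity") == "severe" and r.get("AS_mean_gradient", 0) < 20,
         mandatory=True),
    Rule("normal_LV_function_vs_low_EF",
         lambda r: r.get("LV_function") == "normal" and r.get("LVEF", 60) < 50,
         mandatory=True),
    Rule("normal_LA_description_vs_dilated_LA_volume",
         lambda r: r.get("LA_description") == "normal" and r.get("LA_volume_index", 0) > 34,
         mandatory=False),
]

def validate(report: dict):
    """Return (errors, inconsistencies) triggered by a report."""
    errors = [rule.name for rule in RULES if rule.mandatory and rule.check(report)]
    inconsistencies = [rule.name for rule in RULES if not rule.mandatory and rule.check(report)]
    return errors, inconsistencies

# Example: a report describing severe aortic stenosis but recording a low mean gradient
report = {"AS_severity": "severe", "AS_mean_gradient": 15, "LV_function": "normal", "LVEF": 62}
errors, inconsistencies = validate(report)
if errors:
    print("Resolve before finalizing:", errors)     # would block report finalization
if inconsistencies:
    print("Suggested review:", inconsistencies)     # resolution optional
```

In an actual reporting system, the rules would of course be authored and maintained by the laboratory itself, and the mandatory category would prevent a report from being finalized until the conflict is resolved.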


The authors documented substantial need for such a tool: 83% of reports had errors or inconsistencies, with an average of 0.7 ± 0.9 mandatory resolution errors and 1.7 ± 1.6 suggested resolution inconsistencies per report. Although I suspect I am not alone in believing (hoping?) that my echocardiographic reports only rarely contain errors and inconsistencies, few of us have measured this in an objective fashion, unlike the present report’s authors. They are therefore to be commended for their honesty and willingness to share their results in a public forum, rather than condemned for less than stellar quality. Because there is no reason to assume that readers in this laboratory are any different from those in other laboratories, these data provide an important wake-up call to all echocardiographers. Report accuracy and internal consistency are areas in which we cannot afford to fail, and in which we must not assume that we are doing well simply because we have no easy or comprehensive way to assess how we are actually doing. Prospective data collection is a cornerstone of quality improvement; the oft-quoted maxim that you cannot manage what you do not measure rings true here. One of the major contributions of this report is to highlight this important principle and to capture the spirit of quality as an unbiased and open process of improvement rather than a monolithic achieved goal.


Despite Spencer et al.’s detailed and comprehensive undertaking, and the impressive results, there are several concerns. Chief among these is that there was no objective measure of report quality. Although report quality is difficult to measure, the question of whether reports were actually improved is a central one. Although one might reasonably assume that removing contradictory findings and measurements would be an improvement, ideally one would be able to quantify this objectively to ensure that the effort actually has an impact and that there are no associated unintended consequences. For example, in a previous study, investigators evaluated radiology residents’ report quality before and after randomization to the use of structured reporting versus usual dictation and found a deterioration in the accuracy and completeness of reports in the cohort using structured reporting compared with increases in both parameters in the dictation group. Although it is generally assumed that structured reporting enhances report completeness, “improvement” can be in the eye of the beholder: another study of structured reporting implementation found that radiologists underestimated the benefit as measured by referring clinician satisfaction: 85% of referring clinicians felt that the use of templates improved radiology reports, whereas only 55% of radiologists did.


There are other ways in which an embedded reporting quality improvement tool might adversely affect outcomes. A hard-wired requirement for consistency could obscure nuance important to individual patient care or diminish the ability to practice the art of medicine. In Spencer et al.’s hands, errors involving measurements were more often corrected than descriptive statements without quantitation. However, it is possible that valuing seemingly “objective” information over subjective data introduces inaccuracies. Given the acknowledged difficulties in echocardiographic reproducibility even under optimal clinical trial settings, an educated but qualitative “guesstimate” may at times be preferable to a measurement, which can imply an accuracy that is difficult to deliver in day-to-day clinical work.


Another concern with the present study is its performance at a single academic center, using a small, highly selected group of five experienced readers. This may limit generalizability. As in any study, some design choices were made that may have affected the results, and it is interesting to speculate what a modified design might reveal. Although the tool did not evaluate free text, one might suspect that there would have been even more conflicts if it had, as such fields are often used to soften or modify the finding codes of computer drop-down menus. It is not clear whether the results were returned to readers in any way, whether in aggregated fashion or as individual scorecards with benchmarking against other readers. We are all data driven and seek to be at least as good as those around us; perhaps such feedback might improve reader performance. Also, as acknowledged by the authors, we do not know whether the tool would be useful for less experienced readers, or whether minor (or major) changes in rules or processes would improve compliance (e.g., would readers be more likely to respond to error presentation at the time of conflicting finding code selection rather than at the end of report completion, when motivation to return to report generation is low?). And perhaps most important, what is a reasonable frequency of errors and inconsistencies: is “none” even humanly possible or desirable?


The current report provides valuable, and fascinating, information about the echocardiographic reporting process and human responses to a quality improvement initiative, critical information for implementation science. Some findings were expected: there were more errors when the report reading rate increased. Others were unexpected: there were more errors earlier in the day than later, there was no relationship to reader experience (although all readers were experienced), and the error rate tended to increase over time. Together with an association between the number of conflicts and the likelihood of ignoring them, this suggests either some fatigue with the tool or an overall lack of acceptance. Indeed, 73% of suggested rules were ignored. Thus, reader satisfaction, or dissatisfaction, with the tool can be a barrier to its effectiveness. Although one might expect that effectiveness in finding outright errors (rated by readers at 4.4 out of 5) should directly translate into making reports better, scores on the question of whether the tool made reports better averaged only 3.8. This discrepancy suggests that adoption of new reporting tools isn’t easy, a critical finding supported by others’ work. In a cohort study of the introduction of a checklist in radiology reporting, 85% of reports used the tool at an institution that required it (but oddly enough, not 100%), whereas only 9% of reports used the tool in a sister hospital in which use was voluntary. Report accuracy was similar in both hospitals. Clearly, accounting for the “human factor” is an essential part of any quality improvement initiative.


Although it is tempting to anticipate the widespread application of quality improvement reporting tools in the future, there are many potential barriers to broader use. The authors caution strongly against assuming that their tool could be a “plug-and-play” modification to any existing reporting system. Rather, they suggest that each laboratory needs to create and customize its own tool, a significant barrier for most. Even if the tool were more translatable, it is important to recognize that this is an early effort documenting the use of a tool constructed to suit an individual laboratory. The authors acknowledge the need to refine their tool, largely by simplifying it so that it becomes “less picky.” Specifically, they propose eliminating unused rules, as only 1,149 of 4,415 potential conflicts ever occurred; reducing the number of frequently ignored rules; and perhaps even limiting alerts to true errors by removing rules related to inconsistency. This “less is more” approach highlights the importance of reader acceptance and is instructive for other efforts. Indeed, many quality initiatives focus on simple interventions and “low-hanging fruit,” aiming for improvement rather than perfection.


There are also a number of potential applications of the present tool, or tools like it, that have not been touched on but hold future promise. These include use as an educational tool and as a measurement tool to identify factors that increase errors (such as reading too fast) or “sloppy” readers (there was a range of 0.8–2.8 conflicts per report among only five readers). Careful study of tool implementation can also help identify unintended consequences, including, for example, a trend toward briefer reports that generate fewer conflicts (incidentally saving time and increasing the generation of relative value units) but may provide less information and therefore be of lower quality.
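As a simple illustration of the measurement use suggested above, a laboratory could aggregate the checking tool’s output per reader to benchmark conflict rates; the log format and the flagging threshold in this sketch are hypothetical, not drawn from the study.

```python
from collections import defaultdict

# Hypothetical log of (reader, conflicts-in-one-report) pairs emitted by the checking tool
conflict_log = [("reader_A", 1), ("reader_A", 0), ("reader_B", 3), ("reader_B", 2), ("reader_C", 1)]

totals = defaultdict(lambda: [0, 0])          # reader -> [total conflicts, report count]
for reader, conflicts in conflict_log:
    totals[reader][0] += conflicts
    totals[reader][1] += 1

for reader, (conflicts, reports) in sorted(totals.items()):
    rate = conflicts / reports
    flag = "  <-- review" if rate > 2.0 else ""   # arbitrary benchmark, for illustration only
    print(f"{reader}: {rate:.1f} conflicts per report{flag}")
```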


Spencer et al. have provided us with a real-world example of a carefully constructed, and much needed, quality improvement tool for echocardiographic reporting. Their findings are both highly promising and sobering, as they cogently demonstrate that creating and implementing such a tool requires substantial resources and commitment that may be beyond most of us. The study is also instructive in identifying the difficulties inherent in quality improvement in general. These challenges are significant and sustained, yet they must be mastered if we are to remain relevant in an increasingly quality-conscious health care future, with its growing reliance on electronic records, big data, public reporting, and value-based decision making. We have learned to talk the talk about quality, but can we, in echocardiography, walk the walk?

