The process of performing an echocardiographic study is much more complicated than just recording the images and interpreting them. It starts with a request for an echocardiogram, based on the clinical issues of concern as well as the caregiver’s understanding of how well echocardiography might address the relevant questions and how the results might affect the patient’s subsequent treatment and clinical course.
Echocardiographers and cardiac sonographers are most familiar with the next step, which begins when the patient arrives in the echocardiography laboratory, or when the echocardiographic machine arrives at the point of care (intensive care unit, operating room, emergency room, cardiac catheterization lab, or many other service sites), where echocardiographic studies are actually performed. The “echocardiography team” is responsible for performing the right study in the right manner, obtaining the right information, and answering the important questions. The interpreting physician must find convincing recorded evidence to come to the right conclusions, to interpret the study findings accurately, and to address the clinical questions of concern. These are key issues, and if the right information is not obtained, the patient may not achieve the maximum benefit from the diagnostic evaluation.
The final step in the process is no less important. This step involves preparing and delivering a final report of the study findings and their implications to the patient’s caregiver. Even if the diagnostic evaluation was optimal, good-quality images were recorded, and the interpreting physician recognized the relevant findings correctly, the evaluation may not improve the patient’s care unless the needed information reaches the caregiver in a timely and cogent manner. In other words, the study report is just as important as the diagnostic study itself; it is the interface that helps translate study findings into appropriate patient care.
As the discipline of echocardiography has evolved over the years, so have reporting methods. Many of us remember dictating reports of study findings and conclusions that were then transcribed into a final report that might have been carried to a hospital ward and placed in the patient’s chart, or mailed to the office of the requesting physician, or even transmitted using a “facsimile” (fax) machine! We realize that in many practices, reports are still dictated and transcribed. In such a setting, it is possible that the final transcribed report includes statements that do not make sense or are erroneous. As an example, imagine that the interpreting physician, after reviewing an echocardiogram from a patient with prior myocardial infarctions and heart failure symptoms, dictates the following statements: “Left ventricular function is depressed, with ejection fraction 25%. Mural thrombus is not seen.” Inadvertently, however, the transcriptionist types “Left ventricular function is depressed, with ejection fraction 25%. Mural thrombus is seen.” Such a report might lead to further studies or treatments that might be unnecessary and inappropriate. To recognize and correct such inadvertent mistakes, the busy echocardiographer would need to proofread each report with great care. Even then, it is sometimes hard to catch minor discrepancies that might have unintended consequences.
The advent of digital imaging has been accompanied by digital reporting systems, and many busy laboratories now use such systems to generate final reports and to send these, electronically, to the requesting caregiver. This approach has a number of advantages: reports can be generated and finalized much more rapidly, and once authenticated, the final report can be transferred to the electronic medical record, where it can be accessed immediately by the patient’s caregiver and any other health care professionals who are involved in the patient’s care. Of course, there are potential downsides: a report that contains inadvertent errors or inconsistencies will be delivered immediately to the electronic medical record and be easily accessed by the patient’s caregiver and others who are involved in the patient’s care. It is possible to imagine a number of reasons for mistakes in reports, including failure to record the right data, failure to interpret the data correctly, failure to prepare the report correctly, and failure to identify discrepancies or typographic errors. Regardless of the reason, once an erroneous or confusing report is sent to the requesting caregiver, the “genie is out of the bottle.” We believe that most of the time, reports are accurate and helpful for patient care, but we do understand that sometimes reports do include mistakes, usually inadvertent and unintended, and that these can affect the value of the echocardiographic study to the patient.
In this issue of JASE, Chandra et al describe a very interesting study in which they took advantage of the construction of digital systems for report generation to investigate the frequency and nature of discrepancies in reports. The authors capitalized on the fact that in most “facilitated reporting” systems, final reports are prepared by selecting a number of “finding codes” that then automatically generate diagnostic statements in the final report, describing the study findings and conclusions in coherent English. As a generic example, let us imagine a finding code listed as “moderate MR.” When this code is selected, the following statement appears in the report: “Moderate mitral regurgitation is present.” Unlike dictated reports, where the interpreting physician can dictate literally an unlimited variety of statements, a “facilitated report” is constructed from a number of finding codes, which can be captured and analyzed. Chandra et al reviewed the finding codes in a single, commercially available facilitated reporting system to identify pairs of codes that ought to be considered “contradictory”; for example, a finding code that generated the statement “No left ventricular thrombus is present” in the findings section of a report would contradict a second code that generated “Left ventricular mural thrombus is seen” in the conclusions section. Other “code pairs” might not be mutually exclusive but would generally be considered “inconsistent.” For example, it would be unusual for a report to include finding codes that indicate that “severe aortic regurgitation is present” and that “left ventricular size is normal,” because most patients with severe aortic regurgitation also have accompanying left ventricular enlargement.
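The logic of such pair-wise screening can be sketched in a few lines of Python. The finding codes, generated statements, and rule pairs below are invented for illustration; they do not come from the reporting system studied by Chandra et al, which used a far larger, center-specific rule set.

```python
# Hypothetical finding codes mapped to the diagnostic statements they generate.
STATEMENTS = {
    "moderate_mr": "Moderate mitral regurgitation is present.",
    "no_lv_thrombus": "No left ventricular thrombus is present.",
    "lv_thrombus": "Left ventricular mural thrombus is seen.",
    "severe_ar": "Severe aortic regurgitation is present.",
    "normal_lv_size": "Left ventricular size is normal.",
}

# Code pairs that can never both be true ("contradictory") and pairs that
# are merely unusual together ("inconsistent").
CONTRADICTORY = {frozenset({"no_lv_thrombus", "lv_thrombus"})}
INCONSISTENT = {frozenset({"severe_ar", "normal_lv_size"})}

def screen_report(codes):
    """Return the contradictory and inconsistent code pairs in one report."""
    selected = set(codes)
    hits = {"contradictory": [], "inconsistent": []}
    for pair in CONTRADICTORY:
        if pair <= selected:  # both codes of the pair were selected
            hits["contradictory"].append(sorted(pair))
    for pair in INCONSISTENT:
        if pair <= selected:
            hits["inconsistent"].append(sorted(pair))
    return hits

# A report that includes both thrombus codes is flagged as contradictory.
result = screen_report(["moderate_mr", "no_lv_thrombus", "lv_thrombus"])
```

Because every facilitated report reduces to a finite set of selected codes, screening of this kind can be run automatically on every report, which is what makes the authors’ retrospective analysis of >96,000 reports feasible.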
It would be possible, however, for a patient to have both severe aortic regurgitation and a nondilated left ventricle in the setting of acute severe aortic regurgitation caused by fulminant endocarditis or traumatic rupture of an aortic valve cusp, or when the patient had a concomitant condition such as pericardial constriction or tamponade that restricted the size of the left ventricle.
Chandra et al investigated >96,000 reports, generated over an 11-year period, and discovered that contradictory findings were present in 4% of transthoracic echocardiographic reports, 3.6% of transesophageal echocardiographic reports, and 7.1% of stress echocardiographic reports. Statements that were inconsistent were found in nearly one quarter of reports. They also pointed out that it should be possible to use analysis of facilitated reporting codes to “flag” contradictory or inconsistent finding code pairs so that the interpreting physician could review and correct any inadvertent errors before releasing the final report to the electronic medical record, although they did not test those potential approaches to determine if rates of discrepancies were reduced thereby.
We believe that several points need to be emphasized with regard to the findings of Chandra et al. First, the majority of reports investigated in this study did not include discrepancies. Although one can look askance at the fact that discrepancies were not rare, the majority were inconsistencies rather than frank contradictions. It is not possible for a given patient to have “mitral prolapse” and “no mitral prolapse” at the same time, but there may be legitimate reasons why a given patient could have both “left ventricular enlargement” and “normal left ventricular ejection fraction.”
Second, the importance of accurate reporting should nonetheless not be undervalued. The fact that some reports did include discrepancies is a valuable observation. Identifying the problem is only the first step toward correcting it, however. It certainly seems possible that the approach used by Chandra et al could be adapted so that the facilitated reporting software would alert the interpreting physician to the presence of contradictory or inconsistent finding code pairs. This “system prompt” would allow the interpreter to reevaluate those statements and either correct them in the final report or provide a cogent explanation justifying the diagnostic statements.
Third, in reality, identifying discrepancies in echocardiographic reporting should be viewed as an opportunity for continuous quality improvement. We believe that the majority of reporting discrepancies were inadvertent and that it would be counterproductive to use the sort of analysis described by Chandra et al to reprimand the individual interpreters. Instead, it would be most productive to use this tool to implement quality improvement measures. One should think of continuous quality improvement as an ongoing process of reviewing performance, identifying variations in performance, analyzing the reasons for these variations, implementing methods for influencing those variations that are deemed to need modification, and then reviewing performance to determine the efficacy of any interventions. The purpose of continuous quality improvement is not to identify “bad guys”; undesirable variations are viewed as an opportunity to identify problems, devise solutions, and make improvements. In our view, the key finding of Chandra et al’s study is that by careful analysis of the patterns of use of finding codes, discrepancies in echocardiographic reporting can be identified. This ought to facilitate the development of solutions to address reporting discrepancies and ultimately to reduce them.
Fourth, the approach described by Chandra et al involved a single facilitated reporting system as implemented at a single academic institution. It is not fair to assume that the exact same approach could be used “across the board” by all laboratories that use facilitated reporting systems. In their correspondence with the editors, the authors pointed out that there is no standardized set of finding codes, that >5000 rules were generated from their center, and that with the current state of the field, a new set of rules would need to be generated for another center that wanted to evaluate its own performance. To our knowledge, different reporting systems are not all organized in an identical manner, and some of these systems can be customized so that the “diagnostic statement” that is triggered by choosing a given finding code can be modified. In other words, in one laboratory, the finding code “normal LV” might generate the statement “Left ventricular size and function are normal,” whereas another laboratory might customize the diagnostic statement to read “Left ventricular volumes are normal and ejection fraction is xx%” (where the measured ejection fraction would be inserted by the system). The latter statement might be consistent with a subsequent statement about reduced global longitudinal strain, for example, while the former statement might be considered “inconsistent.” The general approach of identifying pairs of finding codes that should not be selected together in generating a single report would seem broadly applicable, but as usual, the devil is in the details. We would hope that manufacturers of facilitated reporting systems build into future products the ability to identify inconsistent code pairs and a mechanism for alerting the interpreting physician, for example, by blocking the ability to “finalize” the report until the discrepancy has been addressed and resolved in some manner. 
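Such a “finalize” gate, which refuses to sign a report while a flagged code pair remains neither corrected nor justified, can be sketched as follows. All class and method names here are hypothetical; no existing reporting product is being described, and the justification text echoes the acute aortic regurgitation scenario discussed above.

```python
# Minimal sketch of a report that cannot be finalized while any flagged
# code pair is unresolved. All names are illustrative, not a real product API.
class DiscrepancyBlock(Exception):
    """Raised when finalization is attempted with unresolved discrepancies."""

class FacilitatedReport:
    # Hypothetical rule set: code pairs that must not appear together
    # unless the interpreter explicitly justifies them.
    FLAGGED_PAIRS = {frozenset({"severe_ar", "normal_lv_size"})}

    def __init__(self, codes):
        self.codes = set(codes)
        self.justifications = {}  # flagged pair -> free-text explanation

    def unresolved(self):
        """Flagged pairs present in this report and not yet justified."""
        return [p for p in self.FLAGGED_PAIRS
                if p <= self.codes and p not in self.justifications]

    def justify(self, pair, explanation):
        self.justifications[frozenset(pair)] = explanation

    def finalize(self):
        if self.unresolved():
            raise DiscrepancyBlock("Resolve or justify flagged pairs first.")
        return "FINAL"

report = FacilitatedReport(["severe_ar", "normal_lv_size"])
try:
    report.finalize()  # blocked: the flagged pair has no justification yet
except DiscrepancyBlock:
    report.justify({"severe_ar", "normal_lv_size"},
                   "Acute severe AR from endocarditis; LV not yet dilated.")
status = report.finalize()  # succeeds once the pair is justified
```

The design choice worth noting is that the gate does not forbid the code combination outright; it demands either a correction or an explanation, preserving the legitimate exceptions described earlier.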
Anybody who has tried to purchase something online and to pay with a credit card realizes that there are ways to “block” transactions when something in the process is discrepant. We would hope that the same approach might be applied to facilitated reporting.
Last, the issues raised in this article about discrepancies in echocardiography report generation are not unique to echocardiography. In fact, the same approach could be applied to any other type of diagnostic testing, and even more broadly. If one were to analyze reports of other diagnostic studies, one would no doubt occasionally find discrepant statements in these as well. It would be wrong, and unfounded, to conclude from the work of Chandra et al that echocardiographic reports are replete with mistakes and are not to be trusted. To paraphrase a familiar aphorism, “Mistakes happen!” We are aware that many years ago, when copies of the Bible were made by hand, on occasion an inadvertent copying error was introduced. Of course, because the Bible is the Bible, such errors were carefully copied in subsequent versions. Electronic medical records have become the bible of health care; unfortunately, an inadvertent error in the electronic medical record will be (1) legible, (2) easily accessible to other caregivers, and (3) easily copied and pasted into future entries. Those of us who read electronic records, whether in the echocardiography laboratory, in the clinic, on an inpatient ward, or in our offices, have occasionally come upon what appear to be mistakes. Using variations of the analytic approach used by Chandra et al, it might be possible to develop methods for detecting, and ultimately rectifying, discrepancies in electronic records of all kinds.
In summary, the study of facilitated reports of echocardiographic studies in this issue of JASE not only provides a snapshot of the frequency and types of discrepancies found in a high-volume academic echocardiography laboratory but also, and in our view more importantly, demonstrates that it is possible to identify inconsistencies and contradictions in records by analyzing patterns of use of elemental codes. We find exciting the prospect that such an approach might be used to flag errors, and to reconcile them, before a final report is entered into the electronic medical record. Although much work remains to be done to standardize such a feature, implement it in facilitated echocardiographic reporting systems, and test it prospectively, this is an important area that deserves additional attention.