Fig. 13.1
Threat and error model proposed by Helmreich [8]. Immediate threats are factors outside the control of the cockpit crew that increase the complexity of the situation and therefore predispose to error. An error must be recognized before a rescue attempt can be made to correct it. The rescue may completely mitigate the error (inconsequential error). A consequential error, by contrast, is one that leads to an unintended state (through an ineffective or mismanaged rescue attempt, or an important error that goes entirely unrecognized). The unintended state may not itself be dangerous, but it serves as a threat for cycles (“C”) of additional errors or unintended states. It is these cycles that set the stage for amplification of the situation and potential catastrophe. Over-arching organizational and cultural factors may act as latent threats. We propose that the same model holds true in high-stakes medical specialties. Aviation threats have corollaries in medicine (blue), as do error types. Threat and error management strategies, such as crew resource management, serve to: (1) predict and manage threats, (2) minimize human error, (3) increase error recognition, (4) improve team coordination and resource utilization during rescues, (5) maximize safety margins during unintended states and (6) recognize and break cycles of error and unintended states
The notion of failure to perceive is grounded in resilience engineering, which provides a new way of thinking about safety. Complex socio-technical systems are inherently risky. Rather than considering a system or organization to be inherently safe because a set of procedures or rules is followed, safety is something that people in complex environments create by understanding competing demands and variations in conditions.
Lessons from the Flightdeck – Vigilance and Communication
In the cockpit, an error is a human action or inaction that leads to a deviation from the intended or expected circumstance, thereby reducing the safety margin and increasing the probability of an adverse event [6]. Errors are common, ubiquitous and, in accordance with the systems approach, can be considered inevitable [7]. Fly-on-the-wall assessments of >3,500 commercial airline flights by trained observers conclude that 80 % contain error [8]. Fortunately, during routine workload, few crews perform “poorly”; instead, 75–80 % of crews are graded as either “good” or “outstanding”. During high-intensity scenarios, however, there is a significant increase in the number of crews performing “poorly” [9], but also a significant increase in the number performing “outstandingly”. Understanding the working patterns of high-functioning crews is central to understanding effective threat and error management.
Crews that excel in crisis situations are highly vigilant and highly communicative. Review of >10,000 utterances has revealed that during abnormal situations the number of utterances roughly doubles on average. The proportional increase in utterances during these periods of stress correlates significantly with improved performance, fewer errors and – especially – fewer consequential errors [10]. Importantly, the number of problem-solving utterances – a surrogate for vigilance – is strongly associated with high-performing crews. Irrespective of workload complexity, outstanding captains devote one third of all utterances to problem solving – even in routine, low-intensity flight segments. This contrasts with poorly performing and mid-proficiency crews, in which only 5–10 % of utterances relate to problem solving [10]. During both high- and low-intensity situations, outstanding captains vocalize problem solving more than poorly functioning captains by a factor of 7–8 [10]. Problem-solving communication has consequently become the centre of modern threat and error management techniques. To quote Robert Helmreich [10], “it is not that effective communication can overcome inadequate technical flying proficiency; rather, good rudder and stick skills cannot overcome the adverse effects of poor communication.”
Threats May Prompt Error
To paraphrase an Australian pilot: “a threat is anything that takes you away from the ideal day.” Errors may occur completely unprompted, but often arise from a mismanaged threat. Strictly speaking, threats are external influences that increase the operational complexity of the planned procedure or journey [6]; they are, therefore, the risk factors for error. Understanding and mitigating threats is central to the systems approach of threat and error management. In the airline cockpit, threats tend to fall into one of five distinct categories [8] (Table 13.1). The most common relate to terrain or adverse weather. Observational data from commercial airline cockpits indicate that ~75 % of flights face one or more threats (range 0–11; median 2), and approximately 10 % of these threats are mismanaged, leading to an error.
Table 13.1
Classification and prevalence of threat and error subtypes observed during simulator studies and direct observation of >3,500 commercial airline flight segments [8]
Threats (aviation) | Threats (medicine) | Errors (aviation) | Errors (medicine)
---|---|---|---
Terrain – 58 % | Morphology | Violation of SOP – 54 % | Non-adherence to guidelines, SOP
Weather – 28 % | Co-morbidity | Procedural – 28 % | Procedural
Aircraft malfunctions – 15 % | Equipment | Communication – 7 % | Communication
External errors – 8 % (air traffic control, ground crew) | External factors (ward, admin, etc.) | Proficiency – 6 % | Proficiency, knowledge or skill
Operational pressures – 8 % (fatigue, crew stresses) | Operational stressors (fatigue, scheduling, etc.) | Decision error – 7 % | Decision or judgment
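The threat figures quoted above lend themselves to a rough back-of-envelope estimate. The sketch below is purely illustrative: it assumes the reported median of two threats per flight and a ~10 % mismanagement rate, and treats threats as independent, simply to gauge how often a typical flight might carry a threat-induced error.

```python
# Illustrative only: crude estimate of threat-induced error exposure per flight,
# using figures quoted in the text (median ~2 threats per flight, ~10 % of
# threats mismanaged). Threat counts actually range from 0 to 11 per flight,
# and threats are treated here as independent -- a simplifying assumption.

threats_per_flight = 2    # median number of threats observed per flight
p_mismanaged = 0.10       # proportion of threats mismanaged into an error

# Expected number of threat-induced errors on a typical flight
expected_errors = threats_per_flight * p_mismanaged

# Probability that at least one threat on the flight is mismanaged
p_at_least_one = 1 - (1 - p_mismanaged) ** threats_per_flight

print(f"Expected threat-induced errors per flight: {expected_errors:.2f}")
print(f"P(at least one threat-induced error):      {p_at_least_one:.2f}")
```

On these assumptions a typical flight carries about 0.2 threat-induced errors; roughly one flight in five would see a threat mismanaged into an error.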
Latent threats are a particularly important type of threat from a systems error management perspective. They are operational, management or training conditions that indirectly create circumstances which exacerbate the risk of error [11]. Their importance lies in the fact that unless they are addressed, errors are highly likely to recur. To use James Reason’s analogy [3]: active failures are like mosquitoes – they can be swatted one by one, but they keep coming. A better remedy is to drain the swamps in which they breed; the swamps are the latent conditions that give rise to many of the active threats.
Categories of Error
In the airline cockpit, errors tend to fall into one of five categories [8] (Table 13.1). By far the most frequent type of error documented during in-flight cockpit observation is violation of a “standard operating procedure” [8]. This is most commonly intentional non-compliance, for example knowingly omitting or abbreviating a standard checklist. Whilst such non-compliance may reflect a cavalier work ethic, contempt for controlling regulations or misperceptions of personal invulnerability (which, like surgeons, pilots have been shown to exhibit [12]), it should be recognized that over-enthusiastic introduction of protocols will in itself breed non-compliance and disdain for the philosophy of systemic error control. Procedural errors reflect a true “mistake” in the execution of a certain task (often termed “lapses”), for example touching the wrong key when entering coordinates or reading the wrong line of data from a chart. “Proficiency” errors are the least comforting, as the name implies a personal deficiency in skill level. Perhaps, though, they are the most important to acknowledge: denial of failures in proficiency (a tendency in medicine) is to ignore completely the huge innate fallibility of humans.
Unperceived Failure: Unrecognized or Ignored Errors
An error may be actively ignored or may not even be recognized. Of course, errors that are ignored or unrecognized cannot be managed successfully; they will be inconsequential only because they are genuinely trivial, or through pure luck. Conceptually, therefore, unperceived errors are perhaps among the most important targets for error management, as error recognition is a pre-requisite of error rescue. In certain situations, humans may have reasonable judgment regarding when an error can be ignored. However, investigations into intentional non-compliance (by definition, ignored errors) in the aviation industry raise serious doubts about this general assumption. More than 40 % of approach and landing accidents involve intentional non-compliance with a standard operating procedure [8]. Perhaps more importantly, pilots who commit intentional non-compliance errors are 25 % more prone to other types of error than pilots who adhere to standard operating procedures [8]. Non-adherence therefore represents a general propensity to err.
Error Rescue and Unintended States
For those errors that are recognized and not ignored, there is by definition some attempt made to rescue or contain the error. These rescue actions may lead to (1) no change in the situational circumstances (inconsequential error) or (2) an unintended state (consequential error). Importantly, an unintended state may not itself be a danger at all (for example, a perfectly safe, but different, flying configuration in an aircraft). However, a central premise of the threat-error model described by Helmreich [8] is that an unintended state is itself an important threat that significantly increases the propensity for further errors and additional unintended states.
In commercial airline cockpits, 25 % of errors are considered consequential: 19 % lead to an unintended state, whereas 6 % of errors lead directly to a second error [8]. This cycle of unplanned circumstances and errors is considered to be the stage for a catastrophe (Fig. 13.1). Recognition of the error-unintended state cycle requires extreme vigilance, as each unintended state may itself not seem dangerous or unfamiliar. Essentially, the gradual deviation away from the planned or expected journey should indicate that an error-unintended state cycle might be occurring. The over-arching goal in these circumstances should first be to maximize safety margins and then to problem-solve. It should be noted that unintended states might not necessarily be preceded by an error; they may be simply a consequence of appropriate crew actions in response to various threats (weather, terrain, external errors, for example). A third of all flights contain unintended states, and 5 % of landing approaches are considered to be frankly unstable [12]. One third of all unintended aircraft states are considered to be the end result of a chain from threat leading to error leading to unintended state [12].
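To make the error–unintended state cycle of Fig. 13.1 concrete, the sketch below simulates it as a simple chain. The 25 % consequential-error figure comes from the observational data quoted above; the probability that each unintended state breeds a further cycle is not given in the text and is an arbitrary, illustrative assumption. The point is only to show the shape of the cycle and why early recognition truncates it.

```python
import random

# Illustrative sketch of the error / unintended-state cycle in Fig. 13.1.
# Quoted figure: ~25 % of errors are consequential (19 % lead to an unintended
# state, 6 % directly to a second error). P_CYCLE is NOT from the text; it is
# an arbitrary assumption for how often each unintended state breeds the next
# error, used only to illustrate how cycles compound.

P_CONSEQUENTIAL = 0.25   # initial error becomes consequential
P_CYCLE = 0.25           # assumed chance each cycle spawns another ("C" in Fig. 13.1)

def cycles_after_error(rng: random.Random) -> int:
    """Number of error / unintended-state cycles following one initial error."""
    if rng.random() >= P_CONSEQUENTIAL:
        return 0                      # inconsequential: rescued or genuinely trivial
    cycles = 1                        # first unintended state or follow-on error
    while rng.random() < P_CYCLE:     # vigilance failing to break the chain
        cycles += 1
    return cycles

rng = random.Random(42)
outcomes = [cycles_after_error(rng) for _ in range(100_000)]
for depth in (1, 2, 3, 4):
    share = sum(c >= depth for c in outcomes) / len(outcomes)
    print(f"P(at least {depth} cycle(s) after an error): {share:.3f}")
```

On these assumed numbers the likelihood of deep cycles falls off geometrically, yet each additional cycle further erodes safety margins; this is precisely why the strategies listed in Fig. 13.1 emphasize recognizing and breaking the chain early.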