Fig. 22.1 Translational research phases
Although definitions and views about the scope of implementation science vary, a key feature is its approach to the problem of persisting quality gaps from the perspective of potential solutions, rather than problems. While quality and safety improvement generally proceeds from the starting point of suspected quality gaps that must be documented and diagnosed to guide the selection or identification of solutions, implementation science generally begins with the observation that effective, evidence-based practices (including practices for diagnosing, treating and managing disease) are under-utilized and require proactive efforts to facilitate broader implementation (or, alternatively, that ineffective practices are over-utilized and require efforts to achieve de-implementation) [22]. Consistent with this orientation and its close association with clinical research and the study designs and methods valued in clinical research, much of the effort within the field of implementation science has focused on experimental evaluation of specific implementation interventions or strategies, and on the contextual factors—enablers and barriers to success—and other effect modifiers influencing their effectiveness in different circumstances.
Implementation Science and Improvement Science Contributions and Challenges
The emergence of the new discipline of implementation science has driven a rapid increase in studies of how new scientific discoveries are incorporated into new programs and policies and ultimately into routine practice by clinicians [23]. This ultimately is about engaging clinicians to accept and adopt new practices. Although findings and insights from this growing body of work continue to accumulate, several challenges remain and society continues to benefit from only a small fraction of its considerable investment in research in health care and public health.
The optimal application and the overall value of implementation science and improvement science in addressing quality and performance gaps in health care remain uncertain and are in flux. As policy and practice leaders have become increasingly aware of implementation science – and as they consider its potential to augment QI and improvement science approaches to accelerate progress – they quickly discover a lack of guidance or common vision in understanding the relationships and potential synergy between the two fields. This deficiency in published literature and the lack of useful guidance in combining the two closely-related but seemingly separate fields of inquiry reflect a degree of parochialism and an excessive inward-facing orientation by researchers and other experts in both domains. Simplistic characterizations of each field suggest that improvement science focuses too narrowly on the development of ad-hoc solutions to context-specific, unique problems – with insufficient attention to generalizable knowledge and broadly-applicable solutions and insights – whereas implementation science is viewed as incorrectly assuming a level of homogeneity and stability in quality problems that rarely exists, and as seeking robust and very broadly applicable solutions that are similarly rare. While each of these views over-states the limitations of the improvement and implementation science fields, they both reflect underlying differences and similarities that offer potential value in developing future effective approaches to the problems of persistent quality gaps in health care.
Recognition of heterogeneity and complexity is inherent in improvement science, including rapid-cycle improvement methods, such as Plan-Do-Study-Act (PDSA) cycles that involve iterative cycles of planning, design, evaluation and refinement of improvement strategies [24]. These approaches generate context-specific evidence regarding barriers to improvement and help identify solutions and assess their effectiveness with quick turnaround and modest resources. They represent a significant advantage over implementation science approaches that assume homogeneity and resist incremental, real-time refinement as a threat to internal validity and external generalizability. An improvement-science approach recognizes the need for customized, site-specific and context-sensitive solutions based on careful study of current practices and local mental models, and careful surfacing and recognition of barriers to improvement. The implementation-science approach would contribute insights and guidance for addressing heterogeneity and guiding adaptation based on the all-important recognition of context and contextual influences on the success and uptake of implementation processes and outcomes. Context is recognized as important within both fields (improvement and implementation sciences), and is the focus of a growing body of work within implementation science to identify and adapt social science theories and theoretical principles linking contextual factors to differences in practice change processes and outcomes [25–27].
Improvement and implementation sciences offer complementary approaches in other respects as well. Implementation science aims to improve health care quality in part through the development of insights regarding variations in implementation success and the factors driving these variations. Implementation science theories and frameworks, such as the Promoting Action on Research Implementation in Health Services (PARIHS) framework [28] and the Consolidated Framework for Implementation Research (CFIR) [29], identify key factors driving these variations, such as features of the evidence-based practices and innovations to be implemented, attributes of implementation settings and contexts, and the implementation and practice change strategies to be deployed to achieve improvement. Generalizable findings regarding the impacts of contextual factors (such as leadership, organizational culture, staffing sufficiency and stability, other resources and logistical arrangements) will enhance implementation approaches. Similarly, a sustained commitment to thoroughly assess quality gaps and their underlying causes will help improvement proponents increase the likelihood that a given practice change strategy will be compatible and effective in specific settings and for specific quality problems. This knowledge should also prove useful in reducing the number of improvement cycles required to achieve success. It will support a more nuanced, evidence-based approach to the development and evaluation of a series of potential solutions within a rapid-cycle, iterative approach [30].
Despite the potential for mutually beneficial contributions and synergy between the improvement and implementation science fields in their current forms, considerable development and enhancement are needed. Both fields are challenged by the need to better understand and achieve maintenance (sustainability) of improvements and to facilitate scale-up and spread of effective solutions across large multi-site systems and geographic regions. The implementation science field must balance its emphasis on experimental studies and rigor with increased study of naturally-occurring implementation processes to derive the insights they offer regarding factors influencing implementation success across diverse settings, quality problems and implementation strategies. And both fields require additional development of tools and approaches for understanding the mediators, moderators and mechanisms of practice change.
Maintenance, Sustainability, Scale-up and Spread
An improvement program should ideally sustain various elements over time, including its activities, community-level partnerships, organizational practices, benefits to its clients, and the salience of the program’s core issue. These are called “sustainability outcomes” by Scheirer and Dearing [31], and reflect the various ways that a program can continue to achieve its intended effects. However, this highlights the question of how a program can position itself to best ensure that these sustainability outcomes are realized. It has been proposed that a small set of organizational and contextual factors builds the capacity for maintaining a program over time. That is, sustainability is the ability to maintain programming and its benefits over time despite underlying factors that act to undermine and extinguish the effects of the intervention.
Despite its importance for outcomes, sustainability has received relatively little research attention. Findings and insights regarding improvement and implementation are generally documented and learned through reports of successful improvement projects as presented at conferences and seminars and via published articles in research, policy and practice journals. Because the majority of these present short-term impacts only, evidence regarding long-term maintenance of practice change and improvements is limited. Anecdotal evidence, however, suggests that long-term maintenance is rare and difficult to achieve and nearly impossible to document using traditional evaluation methods. The ultimate value of improvement and implementation science requires sustained, ongoing improvements (and measurements) rather than short-term benefits, and thus requires greater attention to the study of, and support for, institutionalization and maintenance of practice change. Recent publications in the implementation science literature advocate greater attention to sustainability and offer frameworks and guidance for studying and achieving sustainability, and represent important contributions to the growing recognition and need for sustainability research, practice and success [32, 33].
Closely related to sustainability challenges are questions of scale-up and spread. The heterogeneity of practice settings and quality problems limits the direct applicability and likely broad effectiveness of improvement and implementation strategies found to be useful in one or a small number of settings. Implementation researchers who recognize the rarity of spontaneous diffusion and widespread adoption of innovations in medical care and health care delivery often neglect to recognize that implementation strategies shown to be effective in one set of sites are unlikely to spontaneously diffuse to other sites, and that their suitability and likely effectiveness may be limited even if some natural spread occurs [34]. Research on scale-up and spread barriers, processes and strategies is limited in the same manner as research on maintenance and sustainability, and represents another “new frontier” for the fields of improvement and implementation sciences as they endeavor to better support more effective and evidence-driven public health policy and practice goals for quality improvement [35].
External Validity and Observational Research
Consistent with its foundations in clinical research and clinical research stakeholders’ preference for rigorous study designs prioritizing internal validity (e.g., randomized controlled trials), many implementation scientists similarly prefer experimental, interventional approaches to evaluate implementation strategies and understand implementation barriers and processes [36]. Although these maximize internal validity and meet standard scientific standards for rigor in study design, they entail considerable compromise in the form of reduced external validity and diminished policy and practice relevance [37, 38]. Investigator-initiated and directed implementation and improvement projects often require special arrangements and time-limited support and resources, including many that are not easily sustained or replicated following the conclusion of the project. These additional resources and support preclude a valid conclusion that the specific practice change strategy under study, rather than the special (and non-sustainable) support and resources provided, is responsible for observed improvements in quality. Observational studies of “natural experiments” and other naturally occurring implementation and improvement processes permit study of implementation and improvement processes and strategies without the complicating addition of artificial, constraining circumstances and support. Increased interest in observational study designs that minimize threats to internal validity [39] will facilitate implementation research that balances internal and external validity and is better able to generate relevant and sustainable policy and practice insights.
Research on Mediators, Moderators and Mechanisms of Practice Change
The theories, tools and methods of implementation science prioritize implementation interventions and strategies and favor summative evaluation research designs and methods to evaluate their effectiveness and impacts. This focus on questions of whether improvement occurred emulates research approaches optimized to evaluate clinical interventions such as drugs and devices, but is less appropriate in situations characterized by high levels of heterogeneity and adaptability. These features of quality problems and solutions are implicitly recognized by improvement science methods and require a focus on the formative processes and mechanisms of improvement (formative evaluation) in addition to a focus on impacts and outcomes. Rapid-cycle improvement approaches are able to accommodate local variations in quality problems and causes, differences in contextual factors such as organizational resources and policies, and other factors that contribute to significant variations in improvement outcomes across sites and over time.