Incorporating theory, research evidence and practical knowledge, realist inquiry is well suited to understanding the issues surrounding implementation in complex healthcare settings. This theory-driven interpretive approach seeks to explain the causes of intervention outcomes, and the patterns of outcomes and effects, by evaluating knowledge and data from a range of sources [26]. Realist conceptualisations of context encompass the individual capacities of key actors; the interpersonal relationships between stakeholders; the institutional setting; and the wider societal, economic and cultural infrastructure.

This appreciation of context and complexity is significant: the realist approach acknowledges that because interventions are governed and conditioned by the contexts that they are embedded in, there is an inherent challenge with regard to transferability to other settings [26].

This is because factors within particular contexts enable certain mechanisms to trigger outcomes; interventions therefore cannot simply be transferred from one context to another and be expected to achieve the same results [28, 29]. We followed a template adapted from Pawson [26]: (1) define the scope of the review and develop the theoretical framework (exploratory background literature searching, stakeholder consultation, theory development); (2) theory-driven purposive search for evidence; (3) appraise evidence and extract data; (4) synthesise data and draw conclusions; (5) disseminate findings.

The realist review process is iterative and non-linear, with considerable overlap between stages and work on different steps undertaken simultaneously (Fig.). No changes were made to the published review protocol [33].

The objective of the first stage was to understand the scope of the review and develop the programme theory. This involved a number of interconnected iterative processes: scoping (exploratory background literature searching); mapping (defining key themes and concepts, conceptualising context); and consultation with stakeholders and experts.

All aspects of this preliminary work informed the programme theory development. A multi-disciplinary advisory group of academics and improvement practitioners was set up to oversee the review, monitor progress, develop consensus and contribute to theory development and interpretation of findings.

In the initial exploratory stage, the review team conducted background searches and consulted with key stakeholders from policy, practice and academia, to map the terrain, refine the research question, and clarify the focus and breadth of the review. This scoping exercise included the identification and scrutiny of key publications examining the role of context in healthcare quality improvement [10, 11, 14, 16, 23, 34, 35].

All but one of the participants were located in Scotland; the other contributor was based in England. Participants held a range of roles within improvement, and several held dual posts spanning both the NHS and academia. The participants provided representation from the three system levels: macro (policy), meso (national organisation implementation and support roles) and micro (local implementation remit).

Accordingly, their viewpoints reflected the various ways in which the different system levels exerted influence on their improvement-related activities. Participants were asked for their views on the role and influence of context in the implementation of quality improvement (QI) initiatives. Fifteen interviews were carried out, at which point saturation was reached. The inclusion of stakeholder views and thinking around the impact of context in improvement during the exploratory stage provided rich contextual information, and key themes emerged to support the development of the initial programme theory.

Findings from the exploratory search and insights from stakeholders and experts generated a number of potentially relevant contexts. The provisional context map formed the basis of further stakeholder engagement. Mapping out the quality improvement landscape within Scotland to represent the emergent theory enabled stakeholders to engage in the exploration of potential contexts, mechanisms and outcomes across macro, meso and micro system levels. This process advanced the initial theory into a generalisable programme theory, applicable to a range of improvement settings.

Hypothesising how improvement activity within and between the different contexts was likely to play out in terms of the associated mechanisms and outcomes, we developed a realist programme theory, expanding the provisional context map. The programme theory was our conceptualisation of the healthcare improvement landscape and of the role and influence of contextual factors. It represented the interactions between multiple components and multiple levels within a complex system, illustrating context-mechanism-outcome (CMO) relationships and the patterns of outcomes and effects.

The programme theory, which formed the theoretical framework for the subsequent stages of the review, is summarised in Fig.

The realist approach aims to retrieve sufficient evidence to answer the research question and achieve theoretical saturation, rather than to search comprehensively [26]. Review evidence was drawn from a range of sources through iterative, broad-brush exploratory searches; these preliminary searches identified a large number of potentially relevant articles that were appraised for inclusion.

Further evidence was located via a broad range of methods: electronic database searches, using index terms and free text; reference scanning; citation tracking; searching the websites of relevant peer-reviewed journals for QI reporting; and electronic alerts. Stakeholders with knowledge and experience in delivering QI initiatives and education, from a wide range of organisations including the National Health Service (NHS), were approached to support and contribute to the search strategy.

The search strategy was purposefully broad and was driven by the programme theory as it developed and was refined through the course of the review. At the review outset, a number of broad-brush exploratory scoping searches were carried out, yielding a large number of full-text documents, including reviews. During programme theory development, further, mostly informal, searches were conducted iteratively, reflecting the current thinking about the programme theory as it evolved; additional full-text documents were retrieved and stored in anticipation of the ensuing stages of the review.
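To make the shape of such a theory-driven search strategy concrete, the sketch below shows one way a broad boolean query might be assembled from concept groups combining index terms and free text. The concept groups, terms and function are illustrative assumptions only, not the search strings actually used in the review.

```python
# Illustrative sketch only: composes a broad boolean search query from concept
# groups, mirroring the "index terms and free text" approach described above.
# The concept groups and terms are hypothetical examples, not the review's
# actual search strings.

CONCEPT_GROUPS = {
    "improvement": ['"quality improvement"', '"improvement collaborative"', "QI"],
    "context": ["context*", '"organisational culture"', '"local conditions"'],
    "setting": ["healthcare", "hospital*", '"primary care"'],
}


def build_query(groups: dict) -> str:
    """OR the terms within each concept group, then AND the groups together."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in groups.values()]
    return " AND ".join(blocks)


if __name__ == "__main__":
    print(build_query(CONCEPT_GROUPS))
```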

These preliminary searches produced a large number of potentially relevant articles. Assessment of these existing documents was carried out prior to further searching; six of these were included in the final synthesis. Once these documents were screened, selected and appraised, further searches were carried out, building on the evidence generated by the preliminary searches, in order to find additional pertinent evidence to further test and refine the programme theory.

Hence, this second search phase identified some additional articles. Quality improvement in healthcare is a relatively new field [1, 8], furthered by the work of the Institute for Healthcare Improvement in the USA [1]. Findings from the background search also suggested that the earlier evidence base would be unreliable: during the first decade of the twenty-first century, QI was still considered a new and developing field for health services research [8], and contextual issues would therefore be less likely to be explicitly acknowledged or reported in older studies.

The realist selection and appraisal process differs from that of a traditional systematic review. Assessing whether research is fit for purpose, according to relevance and rigour, is the realist alternative to quality appraisal in a systematic review. Decisions about rigour and relevance were made on the basis of the potential contribution(s) that a study, either as a whole or in part, could make to the review. In a realist review, the unit of assessment is not each included study itself or the intervention it describes, but rather any sections of the study that are relevant to the underlying theory and to context-mechanism-outcome evidence.

Within a document, different kinds of evidence may be relevant to different aspects of the review or the programme theory [32]. Decisions were based on, for example, whether papers provided any contextual data, data relating to potential mechanisms, identifiable outcomes, or CMO examples (either implicit or explicitly author-identified).

The various types of evidence were assessed for both relevance and rigour. Assessment of rigour focused on the extent to which studies provided a detailed description of their methods, and on the generalisability of their findings. The methodological limitations of any studies included in the review, and any particular issues around data quality, were noted and considered during the analysis and synthesis. Data extraction, analysis and synthesis formed an iterative process, beginning with familiarisation and understanding of each study.

Each included study was read and re-read, initially for familiarisation and then to assess the relevance of its evidence to the underlying theory and to the research questions.

Within each document, relevant passages containing key evidence were highlighted, annotated and coded to identify contexts, mechanisms, outcomes and CMO configurations. Documents were also examined to capture explanatory accounts, themes, concepts and any other relevant data that might contribute to theory refinement. A data extraction template and a sample extraction table are available on request from the corresponding author.
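As an illustration of what such coding might yield, the minimal sketch below defines one possible structure for a single CMO-configuration extraction record. The field names and the example entry are hypothetical and are not drawn from the authors' template.

```python
# Minimal, hypothetical sketch of a CMO-configuration extraction record.
# Field names and example values are illustrative, not the authors' template.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CMOExtraction:
    study_id: str                    # citation or internal study identifier
    system_level: str                # "macro", "meso" or "micro"
    context: str                     # contextual condition reported in the study
    mechanism: str                   # resource or response the context triggers
    outcome: str                     # observed or reported outcome
    author_identified: bool = False  # explicit CMO vs. reviewer-inferred
    notes: List[str] = field(default_factory=list)  # themes, caveats, quotations


# Hypothetical example of one coded passage
example = CMOExtraction(
    study_id="Study 12",
    system_level="micro",
    context="Ward team has protected time and visible senior support",
    mechanism="Staff feel ownership of the change and engage with improvement cycles",
    outcome="Improvement activity is sustained beyond the pilot period",
    notes=["Links to the 'leadership' and 'readiness for change' factors"],
)
```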

Using processes of abstraction and conceptualisation, the reviewers compared and contrasted the evidence, looking for patterns of CMOs across the data that were able to support, contradict or inform the programme theory. Recurring themes were also identified and used to guide the rest of the review process as data extraction and analysis progressed.

Thirty-five studies were identified for inclusion (Table 2). Although a variety of study designs were represented, the studies were predominantly qualitative, including two realist evaluations. Five were mixed-methods, and two were embedded in wider studies. One study used a longitudinal design, and two involved secondary analysis. Thirteen studies specifically aimed to explore the influence of context or contextual factors. Others addressed contextual issues indirectly, in the form of organisational culture, barriers and facilitators to implementation, or improvement capacity and capability.

A variety of standard QI methods and tools were described across the studies. Eleven studies reported on QI collaborative models. Reporting on QI methodology was of poor quality in a small number of the studies, with a lack of detail on the specific improvement methods used. In this section, the various representations of context across the included studies are first explored. We then describe the four key domains that emerged from the data (leadership, organisational characteristics, change agents and multi-disciplinary collaboration), reflecting contextual influences at all levels of the system.

Findings from the evidence synthesis further distilled the four domains into eight key contextual factors: leadership, organisational culture, individual skills and capabilities, organisational capacity and capability, data and technical infrastructure, readiness for change, championship and relationships. The contextual factors were shown to interact across healthcare system levels (macro, meso and micro) during the stages of improvement.

A generalisable theoretical model was then developed to illustrate the interactions between contextual factors, system levels and the various stages of the improvement journey, along a trajectory where improvements are planned, implemented, sustained and spread. Within the studies reviewed, context was represented in a variety of ways, highlighting its dynamic, multi-dimensional and highly variable nature.

It was also used as a means to demonstrate system complexity, through the interactions at the micro, meso and macro system levels [38, 44, 46, 47, 49, 54, 57, 64, 67, 68, 71], supporting the programme theory. Multiple interactions between different aspects of context were reported across the evidence, for example the influence of macro-level contexts on the micro system or the tensions and trade-offs between the two [38, 43, 49, 52, 53, 57, 60, 63].

Some studies attempted to define context within a hierarchy of factors [47, 54, 58, 71]. Many studies considered and compared pre- and post-implementation contexts [38, 41, 46, 48, 67]. Awareness of the potential impact of implementation contexts and local conditions featured widely in the literature, in a range of forms.

Interrelationships among contextual elements acted as barriers to uptake at some sites and as facilitators at other sites, and as such were a predictor of intervention uptake [58]. Some studies explored implementation in multiple settings, highlighting that conditions for readiness, underlying mechanisms and outcomes of the same intervention could be very different depending on the organisational context [43, 52, 53, 56, 60, 66].

Context assessments aimed to build an in-depth understanding of the setting (internal context), tasks, outcomes and environment into which the initiative would be introduced. Assessments included the examination of organisational structures and processes. In some studies, teams utilised QI methods as tools to help them understand and analyse the complexity of their systems [59], whereas others used specific frameworks to systematically evaluate their local context and identify relevant contextual factors to address.

Analysing the data against the programme theory (Fig.), we identified four key domains: leadership, organisational characteristics, change agents and multi-disciplinary collaboration.

The organisational characteristics domain included organisational structures, processes and human resource functions. Awareness of the potential impact of organisational contexts and local conditions featured widely in the review and included examples of context assessment.

Multi-disciplinary collaboration featured very strongly across the included studies, despite playing a lesser role in the programme theory, where it was conceptualised as both mechanism and outcome. Interconnected elements within this domain included professional diversity, relationship building, teamwork and communication; these reinforced other aspects of the programme theory.

These four domains reflected contextual influences at all levels of the system. Examples from the realist exploration of how, why, when and for whom these contextual domains are important to improvement initiatives are provided in Table 3. A number of key mechanisms influencing the delivery and outcomes of quality improvement initiatives were identified from the literature, supporting the programme theory.

The mechanisms were applied to different system levels (Table 4). The map of the theoretical and conceptual landscape of healthcare QI was redrawn with the integration of tacit knowledge from stakeholders, to produce a broader, more descriptive model (Fig.). Given the interpretive and subjective nature of the realist approach [72, 73], this sense-checking exercise was invaluable. Revising the context map enabled progression beyond the data domains, towards an enhanced understanding and identification of contextual factors and their influence and impact.

First, North American and European countries may have more resources to fund this type of work—from large tech companies, wealthy governments, major foundations, and so on. Second, our research was probably skewed toward these geographic areas due to our own limited linguistic capability and networks to reach people in Africa, Asia, and Latin America. Societies on these continents presumably face an equal or even greater threat from influence operations compared to the Western world.

It is therefore important to extend resources, including outreach, to actors interested in countering influence operations in these regions. Moreover, cross-region collaboration could help spread knowledge of emerging malicious techniques and potential countermeasures. To facilitate new partnerships and support, future research should aim to identify more initiatives outside of North America and Europe.

Nearly half of all initiatives in our dataset are housed in civil society organizations including think tanks, NGOs, charities, and other nonprofits.

A large role for civil society is appropriate, because influence operations prey on societal vulnerabilities that cannot be fully addressed by governments or companies alone. However, reliance on short-term donations and grants makes it very difficult for leaders to plan and conduct projects and to recruit and retain personnel. Only a small fraction of initiatives in our dataset (5 percent) are government-run.

This is striking because experts overwhelmingly believe that governments should lead the counter-influence operations effort, according to a PCIO meta-analysis of policy papers. Our research could have undercounted governmental initiatives, for example those that are not publicly announced or clearly labeled as focused on influence operations. Regardless, governments should aim to become more visible leaders in the field. The initiatives in our dataset perform a variety of different functions.

Evaluators also use qualitative methods, such as interviews with participants, to better understand the meaning and value of efforts. Used together, quantitative and qualitative information weaves a rich picture of the initiative's efforts and offers a solid understanding of community-level outcomes.

They are much more powerful together than either could be alone. Unfortunately, it usually takes so long to see if the initiative has really moved the bottom line that this information isn't useful for making the day-to-day improvements initiatives need.

This is why we recommend documenting intermediate outcomes such as changes in the community or broader system.

Measuring community changes -- new or modified programs, policies, or practices -- assists in detecting patterns to see if the initiative is helping to create a healthier environment. Finally, evaluators help community initiatives spread the word about effectiveness to important audiences, such as community boards and grantmakers. Evaluators help provide and interpret data about what works, what makes it work, and what doesn't work.

Ways to get the word out may include presentations, professional articles, workshops and training, handbooks, media reports, and the Internet. Identifying local concerns helps communities decide on and develop strategies and tactics. These, in turn, may guide implementation of interventions, actions, and changes. Important parts may be adapted to work better in the local community, and important changes may be sustained.

This should improve the community's ability to address current and future issues. It may also help attain the initiative's long-term goals and, at the same time, improve researchers' understanding of how to get things done. This may help promote adoption of the entire initiative, or its more effective components, by other communities.

All of these steps may influence each other and help decide what the community will do next. Research and experience in the field provide us with recommendations for community evaluation. These 34 specific recommendations are grouped into categories that follow the five phases of the catalyst and logic models.

These recommendations are directed to a wide audience that includes both practitioners, especially members of community initiatives, and policymakers, including elected and appointed officials and grantmakers. Community evaluation offers two overarching benefits. First, it helps us better understand the community initiative, and second, it improves the community's ability to address issues that matter to local people.

This evaluation perspective joins the traditional research purpose of determining worth with ideas of empowerment. In community evaluation, community members, grantmakers, and evaluators work together to pick the best strategies for the community. The specific mix chosen is determined by several things: the issue to be addressed, the interests and needs of those involved, the resources available for the evaluation, and what the initiative is doing.

The evaluation is designed very carefully to answer the following: How well does this help us understand and contribute to our ability to improve our community?

For example, an injury prevention initiative might work with the local clinic to assess risk behavior with surveys and determine how many deaths and injuries occurred that were related to violence, motor vehicle crashes, or other causes.

Evaluation might be very different for a child welfare initiative, however, which might find it too expensive to watch parents and children interact, or not be able to afford a behavioral survey. Instead, it might collect information on the number of children living below the poverty level or other measurements of children's well-being. Ideally, community evaluation is an early and central part of the initiative's support system.

At the beginning, it helps the group decide on goals and strategies. Later, the evaluation team can document the community's progress towards its goals. Communities often have a local support system, which might include financial resources or service networks that help make it possible for the initiative to make a difference in the community. Community evaluation can help communities recognize their own abilities to bring about change, and then to act on that knowledge.

The community is in a partnership with the evaluation team, with both working together to understand and improve the initiative. Communities identify and mobilize existing resources to bring about changes, and members also help document them.

By documenting these community or systems changes, community evaluation can prompt community members and leadership to discover where change is and should be occurring. When communities are not making things happen, however, the role of the community evaluation team may shift to making the initiative accountable for its actions. When not much happens over a long period, for example, evaluation information can be used to encourage leaders of the initiative to change what's going on.

In extreme cases, community initiatives may be encouraged to change the leadership of the initiative. Finally, renewal of funding -- and bonuses and dividends -- can be based on evidence of progress, including intermediate and longer-term outcomes. Detecting community capacity -- the community's ability to improve things that matter to local people -- is a particularly important challenge for community evaluation.

For example, an initiative trying to prevent substance abuse that causes many important community changes over a long period, and that then really moves the bottom line, might be said to have greater community capacity than a community whose changes didn't stick. If members of the same initiative later take on a new concern such as preventing youth violence and do so effectively, we might be further convinced of improved community capacity.

Successful community partnerships develop, adopt, or adapt interventions and promising practices that will work in their community. How interventions are adapted and implemented becomes almost as important for researchers as what happened as a result of the intervention. Relationships between scientists and communities seem to be changing. This may reflect a minor revolution in traditional modes of science and practice.

Community-based grantmaking emerged as a new, or re-discovered, way to distribute resources. It awards grants to communities to address their concerns themselves, rather than to research scientists to design and implement interventions.

This earlier, researcher-controlled way of doing business didn't address the multiple goals of community initiatives -- improving understanding, capacity, and self-determination. Because of this, there was a lot of unhappiness with traditional research and evaluation. Challenges to their purposes helped bring about the new community-based approaches to evaluation that we have discussed in this section.

The community evaluation system described in this chapter gives a framework and a logic model for examining and improving community initiatives. The methods include providing support, documentation, and feedback. We believe that this approach to evaluation can help local people make a positive difference in their communities.

References

Batalden PB, Davidoff F. Qual Saf Health Care.
Overcoming challenges to improving quality.
Assessing the evidence for context-sensitive effectiveness and safety of patient safety practices: developing criteria (prepared under contract no.). Agency for Healthcare Research and Quality: Rockville.
Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci.
The influence of context on quality improvement success in health care: a systematic review of the literature. Milbank Q.
Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf.
What context features might be important determinants of the effectiveness of patient safety practice interventions?
An exploratory analysis of the model for understanding success in quality. Health Care Manag Rev.
Dopson S, Fitzgerald LA. The active role of context. In: Knowledge to action? Evidence-based health care in context. Oxford: Oxford University Press.
Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the context and implementation of complex interventions (CICI) framework. Accessed 3 July.
Achieving change in primary care - causes of the evidence to practice gap: systematic reviews of reviews.
Realist review - a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy.
Context and implementation: a concept analysis towards conceptual maturity. Z Evid Fortbild Qual Gesundhwe.
Fulop N, Robert G. Context for successful quality improvement: evidence review. London: The Health Foundation.
Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions.
May C. Towards a general theory of implementation.
Implementation, context and complexity.
McDonald KM. Considering context in quality improvement interventions and implementation: concepts, frameworks, and application. Acad Pediatr.
Making sense of complexity in context and implementation: the context and implementation of complex interventions (CICI) framework.
How does context affect quality improvement? In: Perspectives on context.
Robert G, Fulop N. The role of context in successful improvement.
Achieving change in primary care - effectiveness of strategies for improving implementation of complex interventions: systematic review of reviews. BMJ Open.
Evaluating the successful implementation of evidence into practice using the PARIHS framework: theoretical and practical challenges.
Development and testing of the context assessment index (CAI). Worldviews Evid-Based Nurs.
The model for understanding success in quality (MUSIQ): building a theory of context in healthcare quality improvement.
Context matters: the experience of 14 research teams in systematically reporting contextual factors important for practice change. Ann Fam Med.
The influence of context on the effectiveness of hospital quality improvement strategies: a review of systematic reviews.
How to study improvement interventions: a brief overview of possible study types.
Dixon-Woods M. The problem of context in quality improvement.
Diffusion of innovations in service organizations: systematic review and recommendations.
Development and assessment of the Alberta Context Tool.
Bate P. Context is everything.
Parry G, Power M. The ongoing saga of randomised trials in quality improvement.
Pawson R. Evidence-based policy: a realist perspective. London: Sage.
Int J Soc Res Methodol.
Quality improvement needed in quality improvement randomized trials: systematic review of interventions to improve care in diabetes.
A realist evaluation of the management of a well-performing regional hospital in Ghana.
A realistic evaluation: the case of protocol-based care.
Development of a key concept in realist evaluation.
The concept of mechanism from a realist approach: a scoping review to facilitate its operationalization in public health program evaluation.
Lean thinking in healthcare: a realist review of the literature.

